Behavioral Sciences and the Law, 33: 128–145 (2015). Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/bsl.2157

Progress in Violence Risk Assessment and Communication: Hypothesis versus Evidence

Grant T. Harris† and Marnie E. Rice*

We draw a distinction between hypothesis and evidence with respect to the assessment and communication of the risk of violent recidivism. We suggest that some authorities in the field have proposed quite valid and reasonable hypotheses with respect to several issues. Among these are the following: that accuracy will be improved by the adjustment or moderation of numerical scores based on clinical opinions about rare risk factors or other considerations pertaining to the applicability to the case at hand; that there is something fundamentally distinct about protective factors so that they are not merely the obverse of risk factors, such that optimal accuracy cannot be achieved without consideration of such protective factors; and that assessment of dynamic factors is required for optimal accuracy and, furthermore, that interventions aimed at such dynamic factors can be expected to cause reductions in violence risk. We suggest here that, while these are generally reasonable hypotheses, they have been inappropriately presented to practitioners as empirically supported facts, and that practitioners' assessment and communication about violence risk run beyond that supported by the available evidence as a result. We further suggest that this represents harm, especially in impeding scientific progress. Nothing here justifies stasis or simply surrendering to authoritarian custody with somatic treatment. Theoretically motivated and clearly articulated assessment and intervention should be provided for offenders, but in a manner that moves the field more firmly from hypotheses to evidence. Copyright © 2015 John Wiley & Sons, Ltd.

Many know the story of John Snow (often acknowledged as the first epidemiologist), a 19th-century London physician whose careful gathering and analysis of geographical data pinpointed sources of contaminated water that caused cholera in Soho. Perhaps a less well-known aspect of the story is that, at the time, authorities believed that cholera was caused by miasma, the terrible smell associated with human waste and other rotting matter. Certainly at the time (germ theory had not yet been accepted), this was a reasonable hypothesis – for example, the summer often saw worsening miasma and increasing incidence of cholera. The idea led to the construction of London's first sewers to carry waste to the Thames (upstream of the main water intakes, unfortunately). This intervention worsened contamination of the city's water supply (compared with the prior use of cesspits), and thousands of deaths resulted (Johnson, 2006).

*Correspondence to: Marnie E. Rice, Waypoint Centre for Mental Health Care, 500 Church Street, Penetanguishene, ON, Canada, L9M 1G3. E-mail: [email protected]
† Queen's University and the University of Toronto, Ontario, Canada


A recent example is the campaign against dietary fat. Years ago, researchers observed that people who experienced heart attacks often had deposits of fatty plaque obstructing cardiac arteries. A component of this plaque was cholesterol, leading to the reasonable hypothesis that reducing serum cholesterol by reducing dietary cholesterol would decrease heart attacks. Consequently, many of us have been advised by physicians and health-oriented media personalities to eat a low-fat diet, and many prepared foods prominently advertise that they are low in fat. Such advice is still given to Canadians, for example, by their government via Canada's Food Guide (http://www.hc-sc.gc.ca/fn-an/food-guide-aliment/index-eng.php). However, it now seems likely that there is no relationship between dietary fat and serum cholesterol and that low-fat dietary advice might have had negative consequences. This is because calories must come from somewhere and, when people avoid fat, they tend to consume more sugar, which, it turns out, might be a worse cause of arterial plaque (in most people), as well as other serious health problems (Taubes, 2007). Indeed, the World Health Organization has taken the unusual step of publishing its concerns about sugar (http://www.who.int/nutrition/sugars_public_consultation/en/) in an effort to reverse this potential harm.

Surely, no one could be blamed for hypothesizing that miasma was a modifiable cause of cholera, or that dietary fat was a modifiable causal risk factor for heart attack. The harm resulted from treating hypotheses as though they were facts. In this article, we suggest that the same practice has been occurring in the field of risk assessment, especially with respect to the communication to, and by, practitioners about violence risk – hypotheses are treated as empirical evidence. And, we suggest, there have been avoidable harms,1 especially in impeding real empirical progress.

1 Others have noted a similar phenomenon in jurisprudence whereby judges decide what is "logical", ignoring contradictory empirical evidence (Krauss & Scurich, 2013). To the extent they do so due to the bad example provided by practitioners, this would also comprise a serious harm.

VIOLENCE RISK ASSESSMENT IN THE 21ST CENTURY

Recent decades have witnessed considerable progress in measuring and communicating about the risk of violent recidivism by the serious offenders of most concern to public policy – especially violent offenders released from secure custody and/or under community supervision, sex offenders considered for indeterminate dispositions, and perpetrators of domestic violence managed by frontline criminal justice professionals. Early in our careers, there was already a body of knowledge about individual characteristics associated with violent recidivism (e.g., Quinsey et al., 1975a, 1975b) and suboptimal aspects of decision-making about violence risk (e.g., Quinsey, 1975; Quinsey & Ambtman, 1979). However, the literature could offer little more than cautionary summaries of the knowledge (e.g., Monahan, 1981; Quinsey, 1984) and decision-makers had little in the way of systematic methods upon which to rely. Moreover, there was no consensus, indeed no constructive ideas at all, about how the risk of violence, even if it could be validly assessed, could be communicated to decision-makers. Forensic decision-making about violence risk was effectively intuitive and subjective.

Contrast that state of affairs with the present. There are several systems for the assessment of the risk of violence with multiple independent replications of accuracy.


Most relevant to the present issue, these systems are accompanied by rules or guidelines that prescribe how that risk is to be communicated to decision-makers.2 That this work has been largely by psychologists is, we suggest, a signal accomplishment of the discipline. While the reliable and valid measurement of violence risk does not, by itself, reduce it in individual offenders, reduction in violence is impossible without such measurement (and effective communication) – valid measurement of the risk is necessary but not sufficient for a decrease. Also necessary are policies that apportion restrictive and intensive interventions in accordance with assessed risk (Andrews & Bonta, 2010). If secure custody and tight community supervision are to be used to control violent crime, they cannot feasibly be employed for absolutely all offenders. Reductions in violent recidivism will be best achieved when such interventions are reserved for those who represent the greatest risk (Andrews & Bonta, 2010). Clearly, a risk assessment system need not be perfectly accurate for this to be true. Thus, we agree with others in asserting that the valid measurement of violence risk posed by released offenders represents a real and significant advance (Scurich & John, 2012).

Some have gone further, asserting that a valid risk assessment must also inform treatment planning and progress. There are several schemes or models, including personal characteristics and post-release circumstances (communicated to professionals so they can address them in pre-release treatment and post-release intervention), that developers indicate lower the risk of violent criminality. The best of these is, we suggest, the approach thoroughly articulated by Andrews and Bonta (2010) and colleagues. The risk–need–responsivity (RNR) framework requires that resources devoted to violent crime reduction (treatment, rehabilitation, supervision, custody, etc.) be apportioned in accordance with actuarial risk. Indeed, there is evidence (Andrews & Bonta, 2010) that failing to implement this first principle might actually cause avoidable crime by increasing the criminal behavior of those of lowest risk (compared with leaving them alone). The second RNR principle is need – services intended to reduce crime must target characteristics (of people or circumstances) empirically related to the occurrence of crime (called criminogenic needs). If, for example, remorse for past violence were unrelated to violent recidivism, it would make little sense3 to provide programs to make offenders more remorseful. The developers of the RNR framework have provided systems that include assessment of criminogenic needs in the evaluation of and communication about the risk of recidivism. There are also less thoroughly developed schemes that also combine putative needs, risk management targets, and risk-prone circumstances into the assessment of, and communication about, violence risk. Again, most important for present purposes, the output of these processes is a particular form or approach to communication about violence risk – specifically, that if these identified characteristics are addressed, users can assume that the risk of violence will be decreased.

2 There remains debate as to the circumstances under which an appraisal of violence risk is relevant to particular public policy decisions. There are also other objections to systematic violence risk assessment (e.g., the apparent nomothetic/idiographic distinction). These are, with a few exceptions addressed later, nonempirical matters and beyond the scope of this paper, but the interested reader is referred to more extensive treatment elsewhere (Harris et al., 2015).

3 It is quite possible (but as yet not demonstrated) that targeting a non-criminogenic need is necessary. For example, programs to improve forensic patients' post-release employment might be more effective if participants suffering from psychotic symptoms also learned skills to enhance their in-program cognitive functioning, even though there is little reason to believe that such symptoms themselves represent criminogenic needs.


The RNR structure also provides some guidance as to the manner in which needs are best addressed – generally, social learning approaches. We suggest that combining established risk (and/or protective) factors, putative needs, and risk management targets into the assessment of violence risk, and asserting or implying (i.e., communicating) that targeting such needs will reduce violence, represents non-scientific over-reach.4 We discuss two other developments in risk communication for which hypotheses have been taken as evidence, and where advice based on the hypotheses has impeded, rather than promoted, progress. We begin with these two developments and then return to a fuller discussion of criminogenic needs and other aspects of "dynamic" risk.

4 To be clear, we are supportive of RNR because it is well articulated in both theoretical and practical terms. Compared with other approaches in all branches of applied psychology, one rarely encounters anything better grounded theoretically and better instantiated pragmatically. Our overarching point is that, to a substantial degree, RNR remains a set of hypotheses – about what changes wrought by which methods have been demonstrated (in accordance with agreed-upon principles of scientific inference) to have caused reductions in violent criminality. We suggest RNR not be promoted as though the empirical issues have all been settled and, as we attempt to explain in the following, practitioners should explicitly engage in hypothesis testing (as opposed to practice already completely established by evidence) and should incorporate rigorous evaluation.

HYPOTHESIS 1: FINAL RISK RATINGS ARE MORE ACCURATE FOR THE PREDICTION OF RECIDIVISM THAN TOTAL SCORES ON THE STRUCTURED PROFESSIONAL JUDGMENT (SPJ) TOOLS

Advice Based on Hypothesis

Douglas et al. (2014a) recently published recommendations regarding Version 3 of the Historical-Clinical-Risk Management-20 (HCR-20). Users of Version 3, as in the previous versions, are advised to score the instrument (although in this version, there is no total score as items are scored non-numerically), and are then encouraged to use their professional judgment "to come to summary risk ratings [of] low, moderate, or high risk" (p. 95).

Evidence

Douglas et al. (2014b) state that "Research on Version 2 and other SPJ instruments has shown that summary risk ratings tend to add incrementally to the sum of presence ratings" (i.e., the total score of the numeric ratings of all items; p. 106). They also add, "(A)lthough this has not been observed in all studies" (p. 106). Our reading of the literature leads us to conclude, contrary to Douglas et al. (2014b) and Heilbrun et al. (2010), that the accuracy of assessments of violence risk is unimproved or worsened (though rarely significantly) by the summary risk ratings compared with untempered raw scores (Belfrage et al., 2012; de Vogel et al., 2004; de Vogel & de Ruiter, 2006; Desmarais, Nicholls, Wilson, & Brink, 2012; Douglas, Yeomans, & Boer, 2005; Michel et al., 2013; Storey et al., 2013; see also meta-analyses by Guy, 2008, and O'Shea, Mitchell, Picchioni, & Dickens, 2013). We found one study (Douglas, Ogloff, & Hart, 2003) reporting that summary risk ratings improved accuracy over untempered raw totals.


We view the finding that summary risk ratings afford no added value to totals as unsurprising. First, reducing a total score that can range from 0 to 40 to a rating with three values almost certainly means information loss. Secondly, the reliability of the trichotomous judgment has been poor (de Vogel et al., 2004; Douglas et al., 2005; Kropp & Hart, 2000), whereas agreement is better on items and totals (e.g., Sutherland et al., 2012). Furthermore, such categories as low, moderate, and high risk have little consensual meaning (Hilton, Carter, Harris, & Sharpe, 2008). It was a reasonable hypothesis to propose that intuitive adjustment (based on idiosyncratic or rare factors) of total scores would improve overall accuracy and communication, but then merely advising practitioners to do so was, we suggest, treating an hypothesis as evidence (in the face of some good reasons to believe the hypothesis was wrong; Grove & Meehl, 1996; Meehl, 1954).
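The information-loss point can be illustrated with a minimal simulation; the simulated data, the 0–40 scale, the monotone link to risk, and the cut-points below are all hypothetical and stand in for no particular instrument or sample:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
total = rng.integers(0, 41, n)                       # hypothetical 0-40 total scores
p = 1 / (1 + np.exp(-(total - 20) / 5))              # assumed monotone link between score and risk
recidivated = rng.binomial(1, p)

# Arbitrary cut-points standing in for "low / moderate / high" summary ratings
summary_rating = np.digitize(total, bins=[14, 27])

print("AUC, full 0-40 total score:     ", round(roc_auc_score(recidivated, total), 3))
print("AUC, three-level summary rating:", round(roc_auc_score(recidivated, summary_rating), 3))
```

Under these assumptions the collapsed rating cannot discriminate better than the total it was derived from, and it typically discriminates somewhat worse.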

HYPOTHESIS 2: ASSESSING BOTH RISK AND PROTECTIVE FACTORS ENHANCES THE ACCURACY OF VIOLENCE RISK ASSESSMENTS

Advice Based on Hypothesis

"The complementary use of risk and protective factors has been one of the major advances in violence risk assessment in recent years" (de Vries Robbé, de Vogel, & Douglas, 2013, p. 440), and protective factors have been hailed "a new frontier" (de Ruiter & Nicholls, 2011, p. 160).

Evidence

In the field of violence risk assessment, there is no evidence that protective factors are anything other than the obverse of risk factors. Thus far, every protective factor could be re-expressed as a risk: self-control versus impulsivity, motivation for treatment versus lack of motivation for treatment, and so on. We are aware of no counter-examples to this observation.5 For example, in a recent review of risk and protective factors for sexual violence, Tharp et al. (2012) found many fewer factors referred to as protective factors than risk factors, but all protective factors identified could be rephrased as risks.

The Structured Assessment of Protective Factors for violence risk (SAPROF) has recently been proposed for the assessment of protective factors. The goal of the SAPROF was admirable (and based on a reasonable hypothesis): to encourage clinicians to focus on developing strengths to reduce risk rather than focusing exclusively on reducing negative behaviors. There is evidence that high scores on the SAPROF are related to low probabilities of committing new violent offenses (de Vries Robbé et al., 2011; de Vries Robbé et al., 2013).

5 Note that operationalizations are crucial. The obverse of "history of stable employment" could be either "history of unstable employment" or "no history of stable employment", but probably not "has been fired." Or the obverse of "involvement in religious activities" could be "no involvement in religious activities", but probably not "atheist". The distinction between bipolar ("prosocial attitudes" vs. "antisocial attitudes") and unipolar ("alcohol addiction" vs. "no alcohol addiction") is operational and immaterial to the arguments made here. Protective factors could be operationalized as variables that interact with or moderate the effects of risks (essentially as proposed by Rutter, 1987), but that is not how they are treated in this field and there is no research on this alternative conceptualization in violent recidivism research.


There is also evidence that the SAPROF adds to the predictive accuracy of the HCR-20 in the prediction of violent recidivism (de Vries Robbé et al., 2013). Does this mean that it is essential to consider protective factors to optimize the assessment of violence risk? We suggest there is as yet no reason to conclude this is the case. First, although the addition of the SAPROF to the HCR-20 was significant in some analyses, the reverse was not the case – there was no evidence that the HCR-20 added to the predictive accuracy of the SAPROF. Secondly, even if it had, there would have been no need to conclude that the additional predictive accuracy was because the variables were "protective." Instead, the obligation of parsimony requires consideration that the SAPROF tapped variables that (if their direction were reversed) added incremental validity to the HCR-20. We emphasize that it is reasonable to hypothesize that there is something fundamentally different about protective as opposed to risk factors for violence, but hypothesis is not evidence and we suggest that, before developers of risk assessments add "protective factors" to instruments (i.e., operate as though hypotheses were evidence), there needs to be evidence that such items are not simply the obverse of risk factors and that the addition of such items does indeed afford incremental accuracy (above risk factors).
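The parsimony argument can likewise be sketched with simulated, hypothetical data: a "protective" total and its reverse-keyed "risk" version carry exactly the same predictive information, so incremental validity alone cannot show that protection is more than the obverse of risk. Nothing below uses SAPROF or HCR-20 data; all names and values are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
protective = rng.normal(size=n)                  # hypothetical protective-factor total
p = 1 / (1 + np.exp(2 * protective))             # higher "protection" -> lower assumed risk
recidivated = rng.binomial(1, p)
risk_version = -protective                       # the same items, reverse-keyed as risks

print("r with recidivism, protective keying:", round(np.corrcoef(protective, recidivated)[0, 1], 3))
print("r with recidivism, risk keying:      ", round(np.corrcoef(risk_version, recidivated)[0, 1], 3))
print("AUC, risk keying:                    ", round(roc_auc_score(recidivated, risk_version), 3))
print("AUC, protective keying (reflected):  ", round(1 - roc_auc_score(recidivated, protective), 3))
```

Reversing the keying flips the sign of the correlation but leaves discrimination unchanged, so any incremental validity the scale shows could equally be attributed to its risk-phrased obverse.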

HYPOTHESIS 3: THE ASSESSMENT OF DYNAMIC FACTORS, AND CHANGE IN THOSE FACTORS, IS CRITICAL FOR ACCURATE ASSESSMENT OF VIOLENCE RISK

Advice Based on Hypothesis

"The most important advance in offender risk assessment … was the integration of dynamic risk factors (or criminogenic needs) with static risk factors in risk/need instruments" (Bonta, 2007, p. 519), and "static risk tools, if used in treatment settings, should be supplemented with dynamic tools that can assess change" (Olver & Wong, 2011, p. 124).

Evidence

Before we can examine the evidence regarding this advice, we must first define static and dynamic risk factors. Monahan and Skeem (2013), adapting the work of Kraemer et al. (1997), defined four types of risk factors. The first, which they call "fixed markers", is what we and others have called "static" factors. These are, for all intents and purposes pertaining to violence risk, unchangeable (e.g., being male). It is now clearly established in follow-up research that certain characteristics robustly and reliably discriminate those offenders who repeatedly engage in violence (and the most serious violence) from those who are much less likely to do so. Briefly, these can be summarized as reflecting atypically aggressive and antisocial juvenile conduct, a history of crime and violence as an adult (including failure on conditional release and the variety of offenses), a history of substance abuse, behavioral traits associated with antisocial personality, plus age and sex. The ubiquity and stability of these relationships are so secure that investigators have, using follow-up methods, developed actuarial systems for the risk of violent recidivism.


Because items can be selected based on incremental validity, these systems use relatively few items, all reflecting such characteristics. Adhering to specified scoring methods yields large average effects in predicting violent community recidivism (e.g., Harris, Rice, & Quinsey, 2010). A notable aspect of this is that optimal accuracy can be achieved using items that are exclusively historic – all reflecting prior conduct.6 Such systems also have norms identifying assessees' percentile rank, permitting the first branch (risk) of the RNR framework. Specifically, worthwhile communication permits valid statements about an individual's violence risk relative to the population. That, together with data on available resources (e.g., places in secure custody, numbers and caseloads of parole officials), allows the most efficient allocation of resources.

The other three types of risk factors are often encompassed by the term "dynamic," inasmuch as they all refer to factors that can change, or are hypothesized to be able to change. The first of these is what Monahan and Skeem call "variable markers." These are related to recidivism and change, but are unchangeable by treatment (e.g., age). There is no doubt that age (a variable that exhibits intraindividual change, albeit in one highly predictable direction only, but that cannot be made to change via intervention) is a predictor of recidivism. Most follow-up studies report that age is inversely related to violent recidivism. The tempting and reasonable hypothesis therefrom is that changes in age afford unique predictive value. In fact, however, changes in age (operationalized as sentence duration or length of custody) bear only a small and usually nonsignificant association with violent recidivism (e.g., Rice & Harris, 2014). Moreover, the predictive value of offender age at release is smaller than and subsumed by the predictive effect of offenders' age at first recorded offense (Rice & Harris, 2014).

If age is a risk factor but change in age is not, there are three related and non-exclusive possibilities. First, the replicated association between age and violent recidivism largely reflects cohort effects such that, compared with older offenders, those released at younger ages also exhibit worse scores on fixed risk markers (e.g., earlier starting, more prolific, and more persistent history of violent and antisocial conduct). Secondly, simple chronological age is a poor index of the life span changes in antisocial proclivity that do occur. Thirdly, simple chronological aging itself is not much of a cause of anything – the timing and extent of life course changes in aggressive and antisocial behavior are caused by individual differences in life history strategy. Such an individual difference is represented by psychopathy – a condition marked by precocious, prolific, and persistent antisociality and aggression. Because good measures of psychopathy are better indicators of this life course phenomenon and can be gathered before release (indeed, at this point are effectively fixed markers), they largely subsume chronological age in the prediction of violent recidivism. An important implication here is that, even if variable risk markers and variable risk factors for violent recidivism were identified, their predictive value might be subsumed by fixed markers,7 because the fixed marker indicates a highly stable feature that moderates or controls (i.e., causes) the effect of the apparently variable indicator.

6 The best factors reflecting personality are historic, based almost entirely on recorded prior conduct.

7 We acknowledge that variable markers (e.g., age) could be conceived as truly dynamic if it were shown that, once changed (i.e., once the offender gets older), that offender's risk is reduced. However, as shown elsewhere, there is no evidence yet that this is the case (Rice & Harris, 2014). It would be false to conclude that "the power of static predictors shows that people don't change." The power of static predictors merely shows that variables measured early in life strongly predict who changes and by how much. A different variable (how long one is offense-free after release) appears to add incremental value (Hanson, Harris, Helmus, & Thornton, 2014).


Again, while change in age might well be hypothesized to afford incremental value (over and above a comprehensive set of static items reflecting life course antisociality) in violence risk assessment, the hypothesis as yet musters insufficient support and should not be treated as evidence.

Monahan and Skeem (2013) called the second type of dynamic factor "variable risk factors." We (and others) have called these "potentially dynamic." Monahan and Skeem defined these as factors related to recidivism and changeable by intervention (e.g., deviant sexual preferences) but that have not (yet) been shown to alter the likelihood of recidivism when altered by intervention. Although not considered in the Monahan and Skeem (2013) typology, we propose, using their terminology, that the term "potentially variable risk factors" be used to refer to factors related to recidivism and believed (but not yet actually shown) to be changeable with treatment. The last type of risk factor proposed by Monahan and Skeem (2013) will be discussed later.

A PARTIAL EMPIRICAL ILLUSTRATION

As an illustration of some issues here, we examined data on the violent recidivism of violent offenders from previous studies (Rice, Harris, & Lang, 2013). We have not usually reported on various attitude and context variables in our prior studies. Because this data set was itself constituted from several individual follow-up studies, the exact same variables were not available for all cases, but the findings reported in the following represent at least 600 subjects for each relationship. Table 1 shows the association between violent recidivism over approximately 20 years of opportunity (Rice et al., 2013) and variables reflecting the offenders' attitudes or circumstances, usually during their first post-index offense admission. Two things are immediately apparent from the table. Even with considerable power, some variables forensic clinicians seem to rely upon in forming judgments about violence risk (e.g., insight, denial) actually bore no predictive relationship, while the associations for others (generally more empirically supported ones; e.g., unfavorable attitudes towards convention) were considerably larger.

Table 1. Putative dynamic variables and violent recidivism

Variable                                               r (violent recidivism)
Wanted/interested in treatment                         0.02, ns
Showed remorse for index offense(s)                    0.10*
Judged by clinicians to have insight                   0.02, ns
Exhibited denial for past offenses                     0.06, ns
Rationalized offending behavior                        0.03, ns
Was a social isolate                                   0.06*
Could have made better use of leisure/free time        0.19***
Had unsatisfactory accommodation                       0.17***
Had rewarding peer interaction at work/school          0.15***
Had rewarding interactions with those in authority     0.15***
Unfavorable attitude towards convention                0.24***
Antisocial, criminal attitudes                         0.21***

Note: n ≥ 612; *p < 0.05; ***p < 0.001.


The next step was to examine the ability of the statistically significant associations in Table 1 to provide incremental value in predicting violent recidivism over and above the predictive effect of the revised version of the Violence Risk Appraisal Guide (VRAG-R; Harris, Rice, Quinsey, & Cormier, 2015; Rice et al., 2013). Testing the VRAG-R makes the use of this mixed sample of violent offenders, only some of whom were sex offenders, easier because the VRAG-R was designed for both offender types. In logistic and linear regression analyses, VRAG-R scores yielded (unsurprisingly) the largest predictive relationship with dichotomous violent recidivism (r = 0.440, p < 0.001). Then, only a single variable in Table 1, unsatisfactory accommodation, yielded a statistically significant improvement in the prediction of violent recidivism (beta = 0.06, t = 2.36, p = 0.019, multiple R = 0.444, adjusted R-square change from 0.193 to 0.195). After the addition of this variable to VRAG-R score, no variables in Table 1 could make a statistically significant improvement. Indeed, the sum of the five best (yielding a 0–5 scale) made no significant incremental contribution. We suggest these results indicate it is invalid to conclude that adding apparently dynamic items, even those known to predict the outcome, improves accuracy after a comprehensive battery of static predictors is included.

Readers might make several valid observations about these remarks. First, Table 1 includes both risk and protective factors. Of course, re-expressing each variable would have been straightforward: as examples, "Judged to have insight" and "Antisocial and procriminal attitudes" to "Judged to lack insight" and "Prosocial and anticriminal attitudes", and so on. This would reverse the sign of the correlations in the table, but regression results would be identical. Secondly, the assessment of the variables in the table reflected subjects' presentation at the time of their index offenses, and violent recidivism was measured over a two-decade follow-up. It is possible that assessment at the point of release and a shorter follow-up would have yielded different results. That, we suggest, is a reasonable hypothesis, but not evidence. There is evidence that when variables reflecting clinical presentation and violence are measured more contemporaneously, some significant associations are observed (e.g., Quinsey, Coleman, Jones, & Altrows, 1997), but there are essentially no reports of larger associations than the largest in Table 1. This time span issue is complicated by the fact that long-term risk of post-release violent recidivism is usually the principal assessment question. The observation that only "unsatisfactory accommodation" made an improvement to the VRAG-R (however slight) is, we suggest, also telling. This refers to the offender's living situation at the index offense and not post-release context. It is clear, however, that adults often choose their contexts (especially offenders with respect to post-release contexts) and that such choices can be significantly predicted from traits that can be known long before the choice is made (e.g., by measures of psychopathy; Harris, Hilton, & Rice, 2011). It is a reasonable hypothesis that knowing offenders' post-release contexts affords incremental value (over and above a comprehensive set of valid static, historic predictors, including past accommodation) in assessing the risk of long-term violent recidivism. But an hypothesis is not evidence, and there have been very few successful demonstrations. Lastly, of course, the results above have nothing to do with change in anything. We readily concede that.
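The incremental-validity test just described can be sketched as a hierarchical logistic regression in which a comprehensive static actuarial total is entered first and a putatively dynamic item second; the simulated data, variable names, and effect sizes below are hypothetical and are not the sample analyzed here:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 800
static_score = rng.normal(size=n)                        # stands in for a comprehensive actuarial total
dynamic_item = 0.6 * static_score + rng.normal(size=n)   # a putatively dynamic item, partly redundant
p = 1 / (1 + np.exp(-1.2 * static_score))
recid = rng.binomial(1, p)

base = sm.Logit(recid, sm.add_constant(static_score)).fit(disp=0)
full = sm.Logit(recid, sm.add_constant(np.column_stack([static_score, dynamic_item]))).fit(disp=0)

# Likelihood-ratio test of the increment contributed by the putatively dynamic item
lr = 2 * (full.llf - base.llf)
print("LR chi-square =", round(lr, 2), ", p =", round(stats.chi2.sf(lr, df=1), 3))
```

The logic is the same whether the increment is summarized as a change in R-square, as in the analyses above, or as a likelihood-ratio test, as in this sketch.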
Of course, that same point (that the results above say nothing about change) applies to the proliferation of studies in which putatively dynamic variables are measured only once. We agree that such research designs have little to offer to the practical and theoretical discussion pertaining to dynamic risk assessment. A possible exception to this is, we suggest, that a variable (measured once only) lacking a univariate relationship with the outcome of interest is a poor candidate for a usefully independent causal risk.


It is a plausible hypothesis that, while one-time measures of insight and remorse themselves offer no incremental value in violence risk appraisal, changes in them would. Hypothesis is not evidence, and such a finding would be rather unprecedented in this field.

We propose, therefore, that the term "variable risk factors" proposed by Monahan and Skeem (2013) be reserved for factors related to recidivism and shown to be changeable by intervention. An example from our work is deviant sexual preferences (Rice, Quinsey, & Harris, 1991). In that study, we endeavored to reduce the recidivism of child molesters by changing phallometric age preferences, shown to be related to subsequent violent and sexual recidivism (Hanson & Bussière, 1998). In our (Rice et al., 1991) study, we could alter phallometric preferences during a laboratory-based treatment, thus fulfilling the second requirement for a variable risk factor. However, pre-treatment scores predicted recidivism better than the post-treatment scores (for a similar example pertaining to deviant attitudes, see Helmus, Hanson, Babchishin, & Mann, 2013). We were forced to conclude that our treatment was not effective and that the study provided no evidence to support the idea that phallometric age preferences could be considered to be the last type of dynamic risk factor.

Monahan and Skeem call this last type a "causal risk factor" – factors related to recidivism; changeable by intervention; and which, when changed by intervention, change the likelihood of recidivism. These are the only factors that we call "truly dynamic" or "usefully dynamic." However, although we agree with the categorization of risk factors proposed by Monahan and Skeem (2013), we suggest as well that, in order to qualify as a causal risk factor, it must be shown that the change was caused by intervention. And showing that an intervention caused a change requires strong empirical methods (including, but not limited to, comparison participants who receive either no treatment or a credible alternative). Randomized controlled trials, for example, allow a strong conclusion that the intervention caused a reduction in recidivism. Without evidence from strong methods, what appears to be a change in recidivism risk caused by an intervention could instead be a change that was completely predictable by that individual's pre-existing scores on a tool that incorporates the best-known static predictors of violence. That is, to be a truly or usefully dynamic variable (i.e., causal risk factor), the change score on the putatively dynamic variable must add to the predictive accuracy possible using the best available purely static actuarial tool (Rice, 2008). Moreover, when offenders are measured twice, it could be expected that the addition of the second score to the first results in a more valid estimate of true risk, and that this would be the case even among offenders who did not receive the intervention. That is, if a change score on a putatively causal risk factor is positive, it may be that the offender's first score on the risk was too low, whereas if an offender's change score is negative, it might mean that his first score was too high. If so, it would be expected, simply on the grounds of having two measures instead of one, that the change score would add to the accuracy of the first score. Because of this possibility (i.e., regression towards the mean), we (Rice, 2008) specified that, in order to be a truly dynamic risk factor, the post-intervention scores must be better predictors than pre-intervention scores.
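The measuring-twice (regression towards the mean) point can be illustrated with a minimal simulation under assumed parameters: even when nothing at all changes between two error-prone administrations of the same measure, the average of the two scores tends to predict the outcome better than the first score alone.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
true_risk = rng.normal(size=n)                       # an unchanging underlying propensity
pre = true_risk + rng.normal(scale=0.8, size=n)      # first (pre-"treatment") measurement
post = true_risk + rng.normal(scale=0.8, size=n)     # second measurement; nothing has actually changed
p = 1 / (1 + np.exp(-true_risk))
recid = rng.binomial(1, p)

print("AUC, pre score only:     ", round(roc_auc_score(recid, pre), 3))
print("AUC, post score only:    ", round(roc_auc_score(recid, post), 3))
print("AUC, average of the two: ", round(roc_auc_score(recid, (pre + post) / 2), 3))
```

Because adding a change score to a pre-treatment score is algebraically equivalent to using the post-treatment score, an "incremental" contribution of change can arise with no treatment effect at all, which is why the comparison of post- versus pre-treatment predictive accuracy matters.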

WHAT IS THE AVAILABLE EVIDENCE?

A recent study investigated the role of treatment in reducing the violent recidivism of 108 discharged forensic psychiatric patients over relatively short (1 year) and relatively long (11 years) follow-ups (de Vries Robbé et al., 2014).


Pre-treatment scores on both the HCR-20 and the SAPROF predicted violent recidivism, but adding the post-treatment scores in both logistic and Cox regression analyses showed that the post-treatment scores usually added incremental predictive accuracy. The amount of change between pre-treatment and post-treatment scores also predicted outcome, with receiver operating characteristic (ROC) areas ranging from 0.63 (for the HCR-20 at 1 year) to 0.78 (for the SAPROF at 1 year). The combined change scores of both tools also predicted violent recidivism at both follow-ups, but not quite as well as the SAPROF alone. The authors interpreted their findings as showing that treatment reduced risk, and that the changes on the HCR-20, the SAPROF, and the combination of the two instruments were useful predictors of future violence: non-recidivists showed the greatest improvements in risk and protective factors.

Although these results suggest promise, we believe the conclusions go beyond the data. First, although there were, on average, significant improvements on both the HCR-20 and the SAPROF among the recidivists and non-recidivists, because there was no untreated control group, one may not conclude that treatment caused improvements. It is also perhaps important to note that, although the differences were not significant, recidivists had higher scores on the HCR-20 (especially the "historic" items) and lower scores on the SAPROF than did non-recidivists. Thus, it could be that the patients with higher risk and fewer protective factors were less likely to change than others. If so, the result would be that those who changed more would be less likely to reoffend, but not due to treatment – the result could be predicted entirely from pre-treatment scores. Unfortunately, the authors do not report how well the post-treatment scores alone predicted outcome compared with the pre-treatment scores alone. The fact that post-treatment scores added incremental predictive accuracy to the pre-treatment scores could simply be due to the fact that measuring twice is more accurate than measuring once – that is, even if no treatment occurred between administrations of the tools, using the average of the two measurements (which is essentially what the authors did by adding the change score to the pre-treatment score) would yield higher predictive accuracy than one score alone. Of course, it is possible that treatment reduced risk, that changes on the measures were valid indicators of the amount of change that occurred during treatment, and that improvements on the measures were valid indicators of reduced likelihood of violent recidivism following treatment. However, our point is that there is still work to be done before this hypothesis becomes established fact.

Another recent example is a study by Olver et al. (2014) examining information about change among 539 sex offenders scored on the Violence Risk Scale-Sexual Offender version (VRS-SO), a tool developed for risk assessment and treatment planning. As was the case with de Vries Robbé et al. (2014), the authors reported that improved scores on the VRS-SO were significantly associated with decreases in sexual and violent recidivism, and that this was true even when controlling for indicators of pre-treatment risk (measured using either the Static-99R or the VRS-SO). The authors concluded that instruments that incorporate relevant changes are the most parsimonious, objective, and defensible approach to assessing risk.
Although we hope this will eventually be demonstrated, the treatment has not been subjected to a strong test of efficacy, and the fact that change scores predicted outcome and added to the predictive accuracy of the pre-treatment scores can be explained as described earlier in this paper. That is, Olver et al. did not examine how well the pre- and post-treatment scores independently predicted sexual, violent, and general recidivism.


For sexual recidivism, the best predictor was the post-treatment total (i.e., the sum of both static and purportedly dynamic items), which yielded an ROC area of 0.77 (95% CI: 0.73–0.82). However, the pre-treatment total score yielded a nearly identical 0.76 (95% CI: 0.71–0.81). For violent recidivism, the pre-treatment score on the Static-99R (there were no post-treatment scores on this, as it was assumed that it could not change) yielded an ROC area of 0.74, identical to that of the post-treatment score on the VRS-SO. And for general recidivism, the Static-99R (ROC area = 0.74, 95% CI: 0.70–0.78) performed significantly better than the post-treatment score on the VRS-SO (ROC area = 0.68, 95% CI: 0.64–0.73). We suggest this means that optimal predictive accuracy of sexual, violent, and general recidivism was achieved using only pre-treatment scores, with no need to posit that dynamic items8 are required or that change as a result of treatment improved accuracy.

To be clear, we endorse the hypotheses of Olver et al. (2014) and de Vries Robbé et al. (2014) as reasonable (indeed, in an important sense, we hold them ourselves), but we suggest that the available evidence is insufficient to conclude that the appropriate null hypotheses should be considered to have been rejected. Thus, the conclusory advice to practitioners was, we suggest, premature.

8 Some "dynamic factors" assessed prior to release include changes in such things as sexually deviant lifestyle (Olver, Wong, Nicholaichuk, & Gordon, 2007). How offenders make measurable changes in factors that can be inferred only from prior offenses is elusive.

TO SUMMARIZE, THE STATIC-DYNAMIC DISTINCTION HAS NOT YET HELPED RISK COMMUNICATION

A distinction between static and dynamic risk factors has become nearly ubiquitous but has so far impeded progress in the field of violence risk assessment and communication. What is – or, better still, what should be – a useful "dynamic" (as opposed to static) risk factor? All must agree that, in principle at least, dynamic refers to change – a dynamic factor must be something that does (or can) change. That would appear to be where consensus ends: many investigators measure potential dynamic risk factors only once, leaving changeability to supposition (i.e., hypothesis). This research strategy (measure once and assert "dynamic") cannot contribute to progress. Calling something "dynamic" because one hypothesizes it to be susceptible to change is invalidly replacing evidence with hypothesis.

Obviously, changes that, when actually measured, are unrelated to recidivism cannot be risk factors. Thus, a study examining dynamic risks must seek evidence, for example, that those offenders who exhibit positive changes are less likely to recidivate than those who change for the worse (or do not change). But there are complications. For example, it is possible that offenders who change before release (in a way related to recidivism) are reliably different from those who do not, such that these differences could be deduced from things known before any change occurred. For example, it is likely that those high in measures of psychopathy (assessed upon admission to prison) are the least likely to change in ways related to recidivism, so that the association between changes and recidivism is subsumed by a static variable (a one-time measure of psychopathy) known beforehand. If so, the risk factor (even change) could be truly causal, but not necessarily very usefully dynamic, because it conferred no predictive power in addition to established static relationships and interactions between them. In addition, however, what we most want to know is that we can cause changes to occur that then reduce recidivism.


For example, it would be important to know that changes in skills or attitudes wrought by program participation uniquely predicted recidivism. By "uniquely" we mean that the research design permits the conclusion that pre-treatment differences (e.g., psychopathy) cannot explain all of the apparent associations between changes due to program participation and recidivism. Such research designs require the kind of methodological control typically found in randomized controlled trials. In many research designs, these changes, if demonstrated, then become statistically and analytically static – pre-release difference scores are never re-examined to test whether changes in risk factors are reliably paralleled by changes in antisocial behavior in a specific way. More valuable designs repeatedly test whether changes that occur while participants have the opportunity to offend actually anticipate expected changes in the incidence of offending. There have been no demonstrations that changes deliberately caused by supervising professionals have been followed by appropriate changes in risk.9 In addition, of course, post-release changes that occur (or not) while offenders are under community supervision cannot be used, earlier in time, in assessing risk prior to release.

In this more useful sense, "dynamic" risk factors are those characteristics of the offender or environment that supervising professionals can address so as to reduce the likelihood of subsequent offenses. Despite many hypotheses along these lines, there is essentially no scientifically rigorous evidence about what these are. That is, no characteristics of persons or circumstances associated with subsequent violence have been shown to be changeable by deliberately delivered interventions in such a way that changes are paralleled by subsequent changes in violent behavior. Antisocial attitudes serve as an example here. Such attitudes (supporting criminal behavior and rejection of criminal justice policies) have been shown to be related to criminal behavior. And there is evidence that participation in particular programs induces positive changes in antisocial attitudes. However, attempts to demonstrate that the changes in such attitudes caused by treatment participation are followed by parallel changes in criminal behavior have not generally been successful (e.g., Kroner & Yessine, 2013). In that study, the central claim that antisocial attitudes are a causal risk factor (i.e., that they are a usefully dynamic risk factor and therefore an appropriate treatment target) was examined by testing the key expectation that, independent of pre-treatment variables, those who showed improved attitudes would show lower recidivism; that crucial result was not obtained. Of course, it is eminently reasonable to hypothesize that using interventions to cause improvements in traits or circumstances known (based on one-time measurements) to be associated with recidivism will thereby cause reductions in violent recidivism. But we suggest the hypothesis should not be treated as though it were evidence.

We conclude that the current empirical literature on violence risk shows clearly that a relatively small number of fixed factors, when combined optimally, yield robust and reliable associations with violent recidivism. The empirical evidence about all the other three classes of risk factors enumerated by Monahan and Skeem (2013) stands in stark contrast – no one knows what changes in any factors are reliably associated with parallel changes in violent behavior.
No one knows whether changes in any risk factors can be reliably achieved. And, therefore, no one knows how to produce changes in any risk factors that can be assumed to cause parallel changes in violent behavior (as another example, see Vachon, Lynam, & Johnson, 2014).

9 There is good evidence that effective training (based on RNR principles) for probation officers decreases recidivism (Robinson et al., 2012), but no information on what changes in which aspects of offenders or circumstances were responsible for the decreases.


Why does all this matter, and why must those who study (and communicate about) dynamic risk be held to such tough standards, compared with those reporting static risk? The answer is that asserting something is a usefully dynamic risk factor is asserting a causal explanation of offending. Reporting that offenders' prior (static) criminal records predict recidivism does not entail a statement that the prior records cause criminal behavior. But saying that antisocial attitudes are dynamic risk factors that interventions can change so as to reduce crime is to assert that the attitudes cause crime. And drawing conclusions about causes in the absence of clear evidence is not scientifically permitted and could cause practical harm (as in the examples of miasma and dietary fat). It is certainly acceptable, indeed essential, to hypothesize causal roles for potential usefully dynamic risk factors – hypotheses tested by rigorous evaluations. But a large gulf lies, and should lie, between hypotheses and conclusions. No research yet reported has completely bridged this gulf for dynamic risk assessment. To be clear, there is scant evidence that the addition of dynamic factors (where dynamic means that which reflects change) improves the prediction of violent recidivism over and above the exclusive use of static and historical factors.

Unfortunately, in our view, some authorities have inappropriately and prematurely promulgated "risk assessment and management" schemes explicitly or implicitly asserting these causal roles and/or effectively promising reductions in risk if such things are addressed by intervention. Given the confusion and lack of consensus about all this in the empirical and professional literature, practitioners can be forgiven for mistakenly believing that usefully dynamic risk factors have already been identified according to established rules of scientific inference. Of course, practitioners need to get on with the job of treating and supervising offenders and cannot wait for all the evidence to be compiled. Thus, they must often operate (at least partly) on the basis of hypotheses. But when they do so, they should know they are doing so. Responsibility here lies with researchers, who should not assert hypotheses as "facts" in place of evidence.

SO WHAT SHOULD BE COMMUNICATED AND HOW?

Consequently, we depart sharply from the advice of others (e.g., Monahan & Skeem, 2013) that users select an approach to violence risk assessment depending on their purpose, summarized as follows. If the purpose is merely to apportion resources, choose an assessment based on fixed risk factors. If the purpose is to manage risk (i.e., alter it), choose one that identifies causal risk factors. Then communicate to users which characteristics to change and how to do so. Simply put, there is no empirical basis (as opposed to a hypothetical basis) for the latter approach to risk assessment and communication and it cannot, therefore, be appropriately recommended by psychologists, unless its status as a set of hypotheses is more clearly articulated.

What do we propose instead? Because risk assessment (as opposed to treatment and management) rests upon a much firmer empirical foundation, we advise that the two processes be completely separated in assessment and communication. That is, users should select an established and well-replicated actuarial system that has the most applicability for their circumstances (e.g., violent community recidivism by offenders, domestic violence recidivism, sexually motivated recidivism by sex offenders released from secure custody, interpersonal aggression within institutions, etc.) or develop one themselves using follow-up methods.


A large literature indicates that efficient predictive accuracy could be achieved with about a dozen items. The output of such a system (i.e., what should be communicated about the individual risk of subsequent violence) should include the raw score, the operational definition of recidivism and the duration of opportunity over which it applies, the individual's percentile rank, and the base rate of violence in normative samples. This process is familiar to psychologists who routinely conduct standardized testing.

We advise a completely separate process (distinct from actuarial violence risk assessment) for the identification of risk management and treatment targets. That is, this second process essentially comprises the articulation of hypotheses about the causes of violence and aggression. As such, assessors need familiarity with the relevant science (simplified aides-mémoire will not suffice). For any particular characteristic (e.g., hostile attributions) to be recommended as a target for intervention, two things must be true. First, there must be empirical evidence (as distinct from clinical lore or anecdote) that measures of the putative target are related to violence, in at least some relevant population (e.g., Chen, Coccaro, & Jacobson, 2012). And secondly, there must be evidence, using similar measure(s), that the individual assessee exhibits a problematic score (as opposed, for example, to intuitive 0, 1, 2 impressions). Even more useful communication about hypothesized risk factors would include reference to evidence that any intervention has been shown to cause changes in the characteristic in a relevant population (e.g., Daffern et al., 2013). Data on the availability or feasibility of such an intervention for the assessee and his likely persistence (assessed systematically) could also be usefully communicated, as well as some hypothesized degree of reduction in violence risk, given wholehearted participation. Finally, it is clear that repeated assessment and information about compliance with program requirements would be essential.

We reiterate the basis for this recommendation about communicating violence risk: the evidence on the appraisal of violence risk and its modification is similar to that of meteorology – there is a solid basis to predict tomorrow's weather, even if this is done imperfectly. The state of knowledge about how to alter tomorrow's weather based on this prediction comprises, at this point, nothing more than hypotheses as to how that alteration might be accomplished. Combining known predictive factors and hypothetical interventions in assessment or communication makes no logical or empirical sense and would not be contemplated by the science of meteorology or anything else.
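As a sketch of the kind of actuarial output recommended above, the following uses entirely hypothetical normative scores, outcome definition, and base rate (not those of any published instrument):

```python
from bisect import bisect_right

# Entirely hypothetical normative data: raw scores from an assumed normative sample and
# that sample's base rate of violent recidivism over a stated follow-up window.
NORMATIVE_SCORES = sorted([-10, -6, -3, -1, 0, 2, 4, 5, 7, 9, 11, 14, 17, 21, 24, 28])
BASE_RATE_10YR = 0.32
OUTCOME_DEFINITION = "any new charge for a violent offense within 10 years of opportunity"

def communicate_risk(raw_score: float) -> str:
    # Percentile rank relative to the (hypothetical) normative sample
    percentile = 100 * bisect_right(NORMATIVE_SCORES, raw_score) / len(NORMATIVE_SCORES)
    return (f"Raw actuarial score: {raw_score}\n"
            f"Outcome assessed: {OUTCOME_DEFINITION}\n"
            f"Percentile rank in normative sample: {percentile:.0f}\n"
            f"Normative base rate over that period: {BASE_RATE_10YR:.0%}")

print(communicate_risk(7))
```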

CONCLUSIONS

There is no acceptable evidence to support (and much to contradict) the hypothesis that the alteration, adjustment, or reformulation of numerical risk scores by clinical intuition improves the accuracy of violence risk assessment. The advice to do so was the inappropriate replacement of evidence by hypothesis. There is no evidence that what have heretofore been hypothesized to be protective factors offer fundamental improvements to predictive accuracy in violence risk assessment. Despite nearly 20 years of research on putatively dynamic risk variables, there is little good evidence that any truly dynamic (or causal) risk factors have been identified in a manner that affords any guarantee about how violence can be reduced. We fear that the use of less than rigorous research methods in the field of violence risk assessment, replacing evidence with mere hypotheses, is preventing investigators from continuing to search for more accurate assessment and effective interventions.


At present, in our view, we can be happy that the field of violence risk assessment and communication has truly advanced over the past 40 years, inasmuch as there is now overwhelming evidence that actuarial and structured tools generally (when total scores are used), whether phrased as assessments of risk or protective factors, are a very significant advance.

REFERENCES

Andrews, D. A., & Bonta, J. (2010). The psychology of criminal conduct (5th ed.). New York: Routledge.
Belfrage, H., Strand, S., Storey, J. E., Gibas, A. L., Kropp, P., & Hart, S. D. (2012). Assessment and management of risk for intimate partner violence by police officers using the Spousal Assault Risk Assessment Guide. Law and Human Behavior, 36, 62–67. doi:10.1007/s10979-011-9278-0
Bonta, J. (2007). Offender risk assessment and sentencing. Canadian Journal of Criminology and Criminal Justice, 494–519. doi:10.3/38/cjccj
Chen, P., Coccaro, E. F., & Jacobson, K. C. (2012). Hostile attributional bias, negative emotional responding, and aggression in adults: Moderating effects of gender and impulsivity. Aggressive Behavior, 38, 47–63. doi:10.1002/ab.21407
Daffern, M., Thomas, S., Lee, S., Huband, N., McCarthy, L., Simpson, K., & Duggan, C. (2013). The impact of treatment on hostile-dominance in forensic psychiatric inpatients: Relationships between change in hostile-dominance and recidivism following release from custody. The Journal of Forensic Psychiatry & Psychology, 24, 675–687. doi:10.1080/14789949.2013.834069
de Ruiter, C., & Nicholls, T. L. (2011). Protective factors in forensic mental health: A new frontier. International Journal of Forensic Mental Health, 10, 160–170. doi:10.1080/14999013.2011.600602
de Vogel, V., & de Ruiter, C. (2006). Structured professional judgment of violence risk in forensic clinical practice: A prospective study into the predictive validity of the Dutch HCR-20. Psychology, Crime & Law, 12, 321–336.
de Vogel, V., de Ruiter, C., Hildebrand, M., Bos, B., & van de Ven, P. (2004). Type of discharge and risk of recidivism measured by the HCR-20: A retrospective study in a Dutch sample of treated forensic psychiatric patients. International Journal of Forensic Mental Health, 3, 149–165. doi:10.1080/14999013.2004.10471204
de Vries Robbé, M., de Vogel, V., & Douglas, K. S. (2013). Risk factors and protective factors: A two-sided dynamic approach to violence risk assessment. Journal of Forensic Psychiatry and Psychology, 24, 440–457. doi:10.1080/14789949.2013.818162
de Vries Robbé, M., de Vogel, V., Douglas, K. S., & Nijman, H. L. (2014). Changes in dynamic risk and protective factors for violence during inpatient forensic psychiatric treatment: Predicting reductions in postdischarge community recidivism. Law and Human Behavior. Advance online publication. doi:10.1037/lhb0000089
de Vries Robbé, M., de Vogel, V., & de Spa, E. (2011). Protective factors for violence risk in forensic psychiatric patients: A retrospective validation study of the SAPROF. International Journal of Forensic Mental Health, 10, 178–186. doi:10.1080/14999013.2011.600232
Desmarais, S. L., Nicholls, T. L., Wilson, C. M., & Brink, J. (2012). Using dynamic risk and protective factors to predict inpatient aggression: Reliability and validity of START assessments. Psychological Assessment, 24, 685–700. doi:10.1037/a0026668
Douglas, K. S., Hart, S. D., Groscup, J. L., & Litwack, T. R. (2014a). Assessing violence risk. In I. Weiner & R. K. Otto (Eds.), The handbook of forensic psychology (4th ed., pp. 385–442). NJ: John Wiley & Sons.
Douglas, K. S., Hart, S. D., Webster, C. D., Belfrage, H., Guy, L. S., & Wilson, C. M. (2014b). Historical-Clinical-Risk Management-20, Version 3 (HCR-20 V3): Development and overview. International Journal of Forensic Mental Health, 13, 93–108. doi:10.1080/14999013.2014.906519
Douglas, K. S., Ogloff, J. R., & Hart, S. D. (2003). Evaluation of a model of violence risk assessment among forensic psychiatric patients. Psychiatric Services, 54, 1372–1379. doi:10.1176/appi.ps.54.10.1372
Douglas, K. S., Yeomans, M., & Boer, D. P. (2005). Comparative validity analysis of multiple measures of violence risk in a sample of criminal offenders. Criminal Justice and Behavior, 32, 479–510. doi:10.1177/0093854805278411
Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical–statistical controversy. Psychology, Public Policy, and Law, 2, 293–323.
Guy, L. S. (2008). Performance indicators of the Structured Professional Judgment approach to assessing risk for violence to others: A meta-analytic survey (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. NR58733)
Hanson, R. K., & Bussière, M. T. (1998). Predicting relapse: A meta-analysis of sexual offender recidivism studies. Journal of Consulting and Clinical Psychology, 66, 348–362. doi:10.1037//0022-006X.66.2.348

Hanson, R. K., Harris, A. J. R., Helmus, L., & Thornton, D. (2014). High-risk sex offenders may not be high risk forever. Journal of Interpersonal Violence, 29, 2792–2813. doi:10.1177/08862605/452606
Harris, G. T., Hilton, N. Z., & Rice, M. E. (2011). Explaining the frequency of intimate partner violence by male perpetrators: Do attitude, relationship, and neighborhood variables add to antisociality? Criminal Justice and Behavior, 38, 309–331. doi:10.1177/0093854810397449
Harris, G. T., Rice, M. E., Quinsey, V. L., & Cormier, C. A. (2015). Violent offenders: Appraising and managing risk (3rd ed.). Washington, DC: American Psychological Association.
Harris, G. T., Rice, M. E., & Quinsey, V. L. (2010). Allegiance or fidelity? A clarifying reply. Clinical Psychology: Science and Practice, 17, 82–89. doi:10.1111/j.1468-2850.2009.01197.x
Heilbrun, K., Yasuhara, K., & Sanjay, S. (2010). Violence risk assessment tools: Overview and critical analysis. In R. K. Otto, & K. S. Douglas (Eds.), Handbook of violence risk assessment. International perspectives on forensic mental health (pp. 1–17). New York, NY: Routledge/Taylor & Francis Group.
Helmus, L., Hanson, R. K., Babchishin, K. M., & Mann, R. E. (2013). Attitudes supportive of sexual offending predict recidivism: A meta-analysis. Trauma, Violence & Abuse, 14, 34–53. doi:10.1177/1524838012462244
Hilton, N. Z., Carter, A., Harris, G. T., & Sharpe, A. J. B. (2008). Does using nonnumerical terms to describe risk aid violence risk communication? Clinician agreement and decision-making. Journal of Interpersonal Violence, 23, 171–188. doi:10.1177/0886260507309337
Johnson, S. B. (2006). The ghost map: The story of London's most terrifying epidemic – and how it changed science, cities, and the modern world. London: Riverhead.
Kraemer, H. C., Kazdin, A. E., Offord, D. R., Kessler, R. C., Jensen, P., & Kupfer, D. J. (1997). Coming to terms with the terms of risk. Archives of General Psychiatry, 54, 337–343. doi:10.1001/archpsych.1997.01830160065009
Krauss, D. A., & Scurich, N. (2013). Risk assessment in the law: Legal admissibility, scientific validity, and some disparities between research and practice. Behavioral Sciences & the Law, 31, 215–229. doi:10.1002/bsl.2065
Kroner, D. G., & Yessine, A. K. (2013). Changing risk factors that impact recidivism: In search of mechanisms of change. Law and Human Behavior, 37, 321–336. doi:10.1037/lhb0000022
Kropp, P. R., & Hart, S. D. (2000). The Spousal Assault Risk Assessment (SARA) Guide: Reliability and validity in adult male offenders. Law and Human Behavior, 24, 101–118. doi:10.1023/A:1005430904495
Meehl, P. E. (1954). Clinical vs. statistical prediction. Minneapolis: University of Minnesota Press.
Michel, S. F., Riaz, M., Webster, C., Hart, S. D., Levander, S., Müller-Isberner, R., & Hodgins, S. (2013). Using the HCR-20 to predict aggressive behavior among men with schizophrenia living in the community: Accuracy of prediction, general and forensic settings, and dynamic risk factors. International Journal of Forensic Mental Health, 12, 1–13. doi:10.1080/14999013.2012.760182
Monahan, J. (1981). Predicting violent behavior: An assessment of clinical techniques. Beverly Hills, CA: Sage.
Monahan, J., & Skeem, J. L. (2013). Risk redux: The resurgence of risk assessment in criminal sanctioning. Public Law and Legal Theory Research Paper Series, 2013–36. University of Virginia School of Law. Electronic copy available at: http://ssrn.com/abstract=2332165
Olver, M. E., Beggs Christofferson, S. M., Grace, R. C., & Wong, S. C. (2014). Incorporating change information into sexual offender risk assessments using the Violence Risk Scale-Sexual Offender Version. Sexual Abuse: A Journal of Research and Treatment, 26, 472–499. doi:10.1177/079063213502679
Olver, M. E., Wong, S. C., Nicholaichuk, T., & Gordon, A. (2007). The validity and reliability of the Violence Risk Scale-Sexual Offender Version: Assessing sex offender risk and evaluating therapeutic change. Psychological Assessment, 19, 318–329. doi:10.1037/1040-3590.19.3.318
O'Shea, L. E., Mitchell, A. E., Picchioni, M. M., & Dickens, G. L. (2013). Moderators of the predictive efficacy of the Historical, Clinical and Risk Management-20 for aggression in psychiatric facilities: Systematic review and meta-analysis. Aggression and Violent Behavior, 18, 255–270. doi:10.1016/j.avb.2012.11.016
Quinsey, V. L. (1975). Psychiatric staff conferences of dangerous mentally disordered offenders. Canadian Journal of Behavioural Science, 7, 60–69. doi:10.1037/h0081896
Quinsey, V. L. (1984). Politique institutionelle de liberation: identification des individus dangereux: Une revue de la literature [Institutional release policy and the identification of dangerous men: A review of the literature]. Criminologie, 17, 53–78. (English version available). doi:10.7202/017199ar
Quinsey, V. L., & Ambtman, R. (1979). Variables affecting psychiatrists' and teachers' assessments of the dangerousness of mentally ill offenders. Journal of Consulting and Clinical Psychology, 47, 353–362.
Quinsey, V. L., Coleman, G., Jones, B., & Altrows, I. (1997). Proximal antecedents of eloping and reoffending among supervised mentally disordered offenders. Journal of Interpersonal Violence, 12, 794–813. doi:10.1177/088626097012006002
Quinsey, V. L., Pruesse, M., & Fernley, R. (1975a). Oak Ridge patients: Prerelease characteristics and postrelease adjustment. Journal of Psychiatry and Law, 3, 63–77.
Quinsey, V. L., Warneford, A., Pruesse, M., & Link, N. (1975b). Released Oak Ridge patients: A follow-up of review board discharges. British Journal of Criminology, 15, 264–270.
Rice, M. E. (2008). Current status of violence risk assessment: Is there a role for clinical judgment? In G. Bourgon, R. K. Hanson, J. D. Pozzulo, K. E. Morton Bourgon, & C. L. Tanasichuk (Eds.), Proceedings of the North American Correctional and Criminal Justice Psychology Conference. Public Safety Canada User Report 2008–02. Ottawa, ON: Public Safety Canada.
Rice, M. E., & Harris, G. T. (2014). What does it mean when age is related to recidivism among sex offenders? Law and Human Behavior, 38, 151–161. doi:10.1037/lhb0000052
Rice, M. E., Harris, G. T., & Lang, C. (2013). Validation of and revision to the VRAG and SORAG: The Violence Risk Appraisal Guide – Revised (VRAG-R). Psychological Assessment, 25, 951–965. doi:10.1037/a0032878
Rice, M. E., Quinsey, V. L., & Harris, G. T. (1991). Sexual recidivism among child molesters released from a maximum security psychiatric institution. Journal of Consulting and Clinical Psychology, 59, 381–386. doi:10.1037//0022-006X.59.3.381
Robinson, C. R., Lowenkamp, C. T., Holsinger, A. M., VanBenschoten, S. W., Alexander, M., & Oleson, J. C. (2012). A random study of staff training aimed at reducing re-arrest (STARR): Using core correctional practices in probation interactions. Journal of Crime and Justice, 35, 167–188. doi:10.1080/0735648X.2012.674823
Rutter, M. (1987). Psychosocial resilience and protective mechanisms. American Journal of Orthopsychiatry, 57, 316. doi:10.1111/j.1939-0025.1987.tb03541.x
Scurich, N., & John, R. S. (2012). A Bayesian approach to the group versus individual prediction controversy in actuarial risk assessment. Law and Human Behavior, 36, 237. doi:10.1037/h0093973
Storey, J. E., Kropp, R. P., Hart, S. D., Belfrage, H., & Strand, S. (2013). Assessment and management of risk for intimate partner violence by police officers using the Brief Spousal Assault Form for the Evaluation of Risk. Criminal Justice and Behavior, 41, 256–271. doi:10.1177/0093854813503960
Sutherland, A. A., Johnstone, L., Davidson, K. M., Hart, S. D., Cooke, D. J., Kropp, P. R., & Stocks, R. (2012). Sexual violence risk assessment: An investigation of the interrater reliability of professional judgments made using the Risk for Sexual Violence Protocol. International Journal of Forensic Mental Health, 11, 119–133. doi:10.1080/14999013.2012.690020
Taubes, G. (2007). Good calories, bad calories: Fats, carbs, and the controversial science of diet and health. New York: Anchor Books.
Tharp, A. T., DeGue, S., Valle, L. A., Brookmeyer, K. A., Massetti, G. M., & Matjasko, J. L. (2012). A systematic qualitative review of risk and protective factors for sexual violence perpetration. Trauma, Violence and Abuse, 14, 133–167. doi:10.1177/152438012470031
Vachon, D. D., Lynam, D. R., & Johnson, J. A. (2014). The (non)relation between empathy and aggression: Surprising results from a meta-analysis. Psychological Bulletin, 140, 751–773. doi:10.1037/a0035236
