Maybe I’m assuming too much about the cop data, but I don’t see how it is even vaguely comparable to the IAT. The IAT is something I’m doing out of interest at my computer because I think the results might tell me something about myself. The test done with these cops was part of their work and was meant to tell the world something about cops. If my employer asked me to take the IAT, or if I had to agree to have my name and result published before I took it, I’m pretty sure that would mess with the results.
Yes, or that there are other things I don’t know that would have me revise what I think. As is always the case when I talk about anything because the world is complex and I don’t know everything.
The article this thread is discussing covers more than just the IAT that you voluntarily do at your computer to find out more about yourself. There’s a shoot/no shoot study discussed, done with cops drawn from the Denver police department.
Was the Washington study done by cops as part of their work, with them being told to do so by their employer? I haven’t seen anything about voluntary or mandatory participation in that study, or what the participants were told ahead of time.
The Denver study specifies the selection process in the paper about it (which we can read for ourselves), so we shouldn’t need to rely on assumptions here.
I took the IAT. I found myself performing faster on the portion where they associated European American with “Good”. When I went through the first round, associating European American with Bad and African American with Good, I found myself fighting to reconcile the two and even categorizing a couple wrong. By the time I got to the next round, I’d gotten used to the concept.
Honestly, I felt that the IAT was geared toward reaching that goal. Maybe I’m completely off base, and the answer is that I was raised in a rural, almost entirely white environment, but I know from experience that it’s possible to create surveys that get the “right” results.
Well sorry for any confusion then. I don’t know what differences about the methodology of different shoot/no shoot studies would lead to one being more accurate than the other. I think that, as a baseline, if you perform a shoot/no shoot test on a cop you should assume that they are behaving more as they would if they knew they were being recorded/watched than as they would if they thought they were not being recorded/watched.
I meant “told to by my employer” to be just an example of a situation that would give me the “being watched” feeling. I’m assuming that most cops would rather have people have a positive perception of cops, and that they would be sensitive to any test about something like whether or not to shoot a person. They would know that the results might end up saying, “Hey, cops sure like to shoot people,” and most are probably aware that any such test might end up saying, “Hey, cops sure like to shoot black people especially.” Merely knowing that they are feeding data into a system that may be used to undermine the credibility of cops should make them more cautious.
You can assume they know they are being watched, yes. It’s a jump, though, to assume they know why they are being watched, or to assume what they will take the point of the test to be.
In this case, there are two different studies that appear to show different results but are at least somewhat comparable in methodology. I’m more inclined to believe the one we have more information about (the Denver one), but it’s thin reasoning to declare that the Washington one’s results were skewed by police knowing they were being watched when we don’t know why that would affect them any more than it affected the Denver police (who, according to the published test protocols, also knew they were being watched).
That’s why I said that there’s got to be some other factor at work. If what you’re assuming is true, it should be true in both cases (barring additional factors or details we don’t have).
Cops use force less and get fewer complaints about their use of force when they have cameras on. If cops behave differently while being recorded than when they are not, we should be skeptical of how behaviour in a lab translates to behaviour in the field, particularly when lab behaviour looks better than real-life data from the field. I’m sorry if I haven’t been clear about this; I know you assumed that I didn’t apply this skepticism equally to the two different studies. My only reason to think the Denver study was done well is my confidence in @xeni, who curated this post for us. I think I have pretty good reason to think that the Washington study linked by @jrusssh_md might not have been done well.
One thing that I think we should be able to empirically agree on is that you and I simply shouldn’t interact with one another.
I’m re-quoting this entire thing because you’re not saying anything here that I haven’t agreed about previously. Honestly. We’re in complete agreement in almost this entire statement (my only wobble would be that I have different reasons to trust the Denver study more… sorry, @xeni ).
I really don’t even think we’re exactly in opposition here, and I’m sorry we keep ending up at loggerheads. There’s some conclusions you’re making that I don’t feel I can leap to based on the information available, even though I’m still leaning in the same general direction. I can live with that if you can.
Take three groups. Reward one group for producing biased results, reward another for producing unbiased results, and keep the third as a control. The spread between the incentivised groups and the control should show the margin by which the test can be gamed.
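Just to make the design concrete, here’s a minimal simulation sketch of that three-group comparison. Every number in it (group means, noise level, sample size) is a made-up assumption purely for illustration; it only shows how the “gaming margin” would be computed once you had real scores.

```python
# Hypothetical simulation of the three-group design above.
# All numbers (group means, noise, sample size) are assumptions,
# not real IAT data -- they just illustrate the arithmetic.
import random

random.seed(0)

def simulate_group(mean_bias, n=100, noise=0.2):
    """Simulate IAT-style bias scores for one group (higher = more bias)."""
    return [random.gauss(mean_bias, noise) for _ in range(n)]

# Assumed effect: incentives shift the measured score in each direction.
control = simulate_group(mean_bias=0.30)         # no incentive
reward_bias = simulate_group(mean_bias=0.50)     # rewarded for looking biased
reward_no_bias = simulate_group(mean_bias=0.10)  # rewarded for looking unbiased

def mean(xs):
    return sum(xs) / len(xs)

# The gaming margin: how far incentivised takers can push the score
# in either direction relative to the control group.
margin_up = mean(reward_bias) - mean(control)
margin_down = mean(control) - mean(reward_no_bias)
print(f"gameable upward by ~{margin_up:.2f}, downward by ~{margin_down:.2f}")
```

If the two margins came out near zero on a real test, that would suggest the test is hard to game; large margins would support the “right results” worry.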