Um yeah… so *cough* a few months ago, I got a link to an online test, maybe from this bbs (heck, it may have even been from you; I honestly can't remember right now, though it may well have come from a friend working in Austin on this matter)… and the first thing I noticed is that this implicit bias test is offered by Harvard:
https://implicit.harvard.edu/implicit/takeatest.html
Before taking it, I dug a bit deeper because I was curious about the methodology, and found this on the Project Implicit research group's page:
https://www.projectimplicit.net/about.html
> Project Implicit is a non-profit organization and international collaboration between researchers who are interested in implicit social cognition - thoughts and feelings outside of conscious awareness and control. The goal of the organization is to educate the public about hidden biases and to provide a "virtual laboratory" for collecting data on the Internet.
>
> Project Implicit was founded in 1998 by three scientists – Tony Greenwald (University of Washington), Mahzarin Banaji (Harvard University), and Brian Nosek (University of Virginia). Project Implicit Mental Health launched in 2011, led by Bethany Teachman (University of Virginia) and Matt Nock (Harvard University). Project Implicit also provides consulting services, lectures, and workshops on implicit bias, diversity and inclusion, leadership, applying science to practice, and innovation. If you are interested in finding out more about these services, visit https://www.projectimplicit.net/organization.html.
Hmm.
Setting aside my own assumptions about its makers, I took the test in good faith, and even as a woman of color it is abundantly clear I have plenty of, uh, room for improvement. Oh dear.
There's a big ol' pile of intersecting issues right here, not the least of which is who owns, and what happens to, the Big Data generated when you take an online test from Harvard, or any of the millions of data points our dang phones generate largely in secret. Fractal genies escaping a fractally increasing number of AI bottles, and not a single bottlecap worthy of the name in sight.
A friend who works in infosec told me ~10 years ago that Facebook was a perfect turnkey operation for all things surveillance. A few years later he added smartphones to that opinion. Crimes like Cambridge Analytica's have only hardened his position.
I sense now that the de facto training our smartphones have been providing our upcoming generations consistently warps and ossifies user behavior [as described in the OP]. IMO we have scarcely begun to quantify this formally.
Complaints characterizing Them Dang Youngsters With Their Dang Screen Devices as clueless / unwilling / rude for not knowing how to conduct healthy face-to-face human relationships are but the tip o' the iceberg; the underlying damage will take generations to undo.
Reclaiming the best parts of our humanity, for all humans, will be the work of lifetimes.
I look at wee kids in grocery carts, pacified by screens whilst their parents shop, and I see the warping, the addiction, and the implicit cultural programming setting in far, far earlier than my infosec friend ever predicted.
Sheesh, I think I am going outside to walk the dog now.
Maybe I’ll flip a coin to decide whether I should take my smartphone.