Machine Learning Meets Psychological Science


At long last, artificial intelligence (AI) and its main subset, machine learning, are beginning to fulfill their promise. When fed massive amounts of data, computers can discern patterns (as in speech recognition) and make predictions or decisions. AlphaZero, a computer system from Google’s sibling company DeepMind, played chess, shogi (Japanese chess), and Go against itself. Before long, thanks to machine learning, AlphaZero progressed from no knowledge of each game to being “the best player, human or computer, the world has ever seen.”

[Image: DrAfter123/DigitalVision Vectors/Getty Images]

I’ve had recent opportunities to witness the growing excitement about machine learning’s role in the human future, through conversations with

  • Adrian Weller (a Cambridge University scholar who is program director for the UK’s national institute for data science and AI).
  • Andrew Briggs (Oxford’s Professor of Nanomaterials, who is using machine learning to direct his quantum computing experiments and, like Weller, is pondering what machine learning portends for human flourishing).
  • Brian Odegaard (a UCLA postdoctoral psychologist who uses machine learning to identify brain networks that underlie human consciousness and perception).

Two new medical ventures (to which—full disclosure—my family foundation has given investment support) illustrate machine learning’s potential:

  • Fifth Eye, a University of Michigan spinoff, has had computers mine data on millions of heartbeats from critically ill hospital patients to identify invisible, nuanced signs of deterioration. By detecting patterns that predict patient crashes, the system aims to provide a potentially life-saving early warning, well ahead of doctors or nurses detecting anything amiss (see the sketch after this list).
  • Delphinus, which offers a new ultrasound alternative to mammography, will similarly use machine learning from thousands of breast scans to help radiologists spot potential cancers.
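To make the idea concrete, here is a minimal sketch of this kind of early-warning pattern learning. Fifth Eye’s actual system is proprietary; the data, features, model choice, and alarm threshold below are illustrative assumptions only, not its method.

```python
# Hypothetical sketch: learn to flag patient deterioration from
# vital-sign summaries. All data here is fabricated stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row summarizes a window of heartbeats (e.g., mean rate, variability);
# label 1 = the patient later deteriorated.
n = 1000
X = rng.normal(size=(n, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# The model outputs a risk score for each new window; a hospital system
# could sound an alarm when the score crosses a chosen threshold.
risk = model.predict_proba(X_test)[:, 1]
print("Test windows flagged as high risk:", int((risk > 0.8).sum()))
```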

Other machine-learning diagnostic systems are helping physicians to identify strokes, retinal pathology, and (using sensors and language predictors) the risk of depression or suicide. Machine learning applied to locked-in ALS patients’ brain-wave patterns, distinguishing those associated with “Yes” answers from those associated with “No,” has enabled them to communicate their thoughts and feelings. And it is enabling researchers to translate brain activity into speech.
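The yes/no idea can likewise be sketched as a simple binary classifier. Real brain-computer interfaces use far richer signals and careful validation; the features, labels, and model below are stand-in assumptions, not the published method.

```python
# Hedged sketch: classify a window of brain-signal features as "Yes" or
# "No". The feature vectors here are fabricated stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Stand-in training data: windows recorded while the patient thought
# "yes" (label 1) or "no" (label 0).
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

clf = SVC().fit(X, y)

# Decode a new window into an answer the patient can communicate.
new_window = rng.normal(size=(1, 5))
answer = "Yes" if clf.predict(new_window)[0] == 1 else "No"
print("Decoded answer:", answer)
```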


Consider, too, a new Pew Research Center study of gender representation in Google images. Pew researchers first harvested an archive of 26,981 gender-labeled human faces from different countries and ethnic groups. They fed 80 percent of these images into a computer, which used machine learning to distinguish male from female faces. When tested on the remaining 20 percent, the system achieved 95 percent accuracy.
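Here is a minimal sketch of that 80/20 evaluation protocol. Pew trained a deep neural network on actual face images; the synthetic feature vectors and simple classifier below are stand-ins, so only the split-and-test procedure mirrors the study.

```python
# Sketch of Pew's train/test protocol: hold out 20 percent of the
# labeled faces and measure accuracy on them. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n = 26_981                    # number of labeled faces in Pew's archive
X = rng.normal(size=(n, 10))  # stand-in for image features
y = (X @ rng.normal(size=10) > 0).astype(int)  # stand-in gender labels

# Train on 80 percent of the faces, hold out 20 percent for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=1)

model = LogisticRegression().fit(X_train, y_train)

# Pew reported about 95 percent accuracy on its held-out 20 percent;
# this toy model's score reflects only the fabricated data.
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```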


Pew researchers next had the system apply its new human-like gender-classification ability to 10,000 Google images associated with 105 common occupations. Would the gender representation in the image search results overrepresent, underrepresent, or accurately reflect women’s actual share of each occupation, as reported by U.S. Bureau of Labor Statistics (BLS) data?
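A sketch of this second step, assuming a trained `model` like the one above and a hypothetical `images_for` helper that returns feature vectors for the images matching one search term:

```python
def pictured_female_share(model, occupation, images_for):
    """Share of images for one search term that the model labels female."""
    X_occ = images_for(occupation)  # hypothetical helper: one row per image
    preds = model.predict(X_occ)    # assume label 1 = classified as female
    return preds.mean()

# shares = {job: pictured_female_share(model, job, images_for)
#           for job in occupations}  # 105 occupations in Pew's study
```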


The result? Women, relative to their presence in the working world, were significantly underrepresented in some categories and overrepresented in others. For example, the BLS reports that 57 percent of bartenders are female, yet women were only 29 percent of the first 100 people shown in a Google image search for “bartender” (as you can see for yourself). Searches for “medical records technician,” “probation officer,” “general manager,” “chief executive,” and “security guard” showed a similar underrepresentation. But women were overrepresented, relative to their working proportion, in Google images for “police,” “computer programmer,” “mechanic,” and “singer.” Across all 105 jobs, men were 54 percent of those employed but 60 percent of those pictured. The bottom line: machine learning reveals, in Google image search results, a subtle new form of gender bias.
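The underlying arithmetic, using the bartender numbers reported above:

```python
# Representation gap = pictured share minus actual (BLS) share.
bls_female_share = 0.57       # BLS: share of bartenders who are women
pictured_female_share = 0.29  # share of first 100 Google image results

gap = pictured_female_share - bls_female_share
print(f"Representation gap: {gap:+.0%}")  # -28%: women underrepresented
```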


As these examples illustrate, machine learning holds promise for helpful applications and research. But it will also raise some difficult ethical questions.


Imagine, for example, that age, race, gender, or sexual orientation were incorporated into algorithms that predict recidivism among released prisoners. Would it be discriminatory, or ethical, to use such demographic predictors in making parole decisions?


Such questions already exist in human judgments, but may become more acute if and when we ask machines to make these decisions. Or is there reason to hope that it will be easier to examine and tweak the inner workings of an algorithmic system than to do so with a human mind?


(For David Myers’ other essays on psychological science and everyday life visit www.TalkPsych.com.)

About the Author
David Myers has spent his entire teaching career at Hope College, Michigan, where he has been voted “outstanding professor” and has been selected by students to deliver the commencement address. His award-winning research and writings have appeared in over three dozen scientific periodicals and numerous publications for the general public. He also has authored five general audience books, including The Pursuit of Happiness and Intuition: Its Powers and Perils. David Myers has chaired his city's Human Relations Commission, helped found a thriving assistance center for families in poverty, and spoken to hundreds of college and community groups. Drawing on his experience, he also has written articles and a book (A Quiet World) about hearing loss, and he is advocating a transformation in American assistive listening technology (see www.hearingloop.org).