GLOBAL: Algorithms are becoming ever larger actors in human interaction and decision-making, and calls for greater transparency have come from the public and lawmakers alike; but one study suggests that transparency can cause more problems than it solves if not implemented carefully.

This is according to an article in the Harvard Business Review by Professor Kartik Hosanagar and Vivian Jair, both of the University of Pennsylvania. The writers examine how differing levels of algorithmic transparency affect trust, drawing on examples from academic grading.

In 2014, a professor whose students' papers had been marked by different TAs explained that he would deploy an algorithm to correct for the resulting grading bias. He was flooded with complaints.

In a 2016 study, René Kizilcec, a Stanford PhD student who had been a member of that 2014 class, looked at the effects of grading transparency on student trust by studying the massive open online course (MOOC) platform Coursera, which uses peer grading to mark high volumes of exam entries.

In Kizilcec’s study, 103 students submitted essays and got back two marks: an average peer grade and a ‘computed’ grade that adjusted for grader bias. Some were simply told their grade, while others were given greater transparency, including an explanation of how it had been calculated.
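Neither the article nor this summary spells out how the ‘computed’ grade is derived, but one common way to adjust peer grades for bias is to shift each grader's scores by the gap between that grader's personal average and the overall average, so that habitually harsh or lenient markers are pulled onto a common scale. The Python sketch below illustrates that idea only; the function, data layout and example scores are hypothetical and are not Coursera's or Kizilcec's actual method.

```python
from collections import defaultdict

def bias_adjusted_grades(peer_grades):
    """Illustrative bias correction for peer grading.

    peer_grades: list of (grader_id, submission_id, score) tuples.
    Each grader's scores are shifted by the difference between that
    grader's personal mean and the overall mean, then averaged per
    submission. Returns {submission_id: adjusted average score}.
    """
    overall_mean = sum(s for _, _, s in peer_grades) / len(peer_grades)

    # How far each grader's average sits above or below the overall mean
    by_grader = defaultdict(list)
    for grader, _, score in peer_grades:
        by_grader[grader].append(score)
    grader_bias = {g: sum(v) / len(v) - overall_mean for g, v in by_grader.items()}

    # Average the bias-corrected scores for each submission
    by_submission = defaultdict(list)
    for grader, submission, score in peer_grades:
        by_submission[submission].append(score - grader_bias[grader])
    return {sub: sum(v) / len(v) for sub, v in by_submission.items()}


if __name__ == "__main__":
    grades = [
        ("grader_a", "essay_1", 6), ("grader_a", "essay_2", 5),  # harsh grader
        ("grader_b", "essay_1", 9), ("grader_b", "essay_2", 8),  # lenient grader
    ]
    print(bias_adjusted_grades(grades))  # both essays converge on one adjusted mark
```

In this toy run, the harsh and lenient graders' marks for each essay converge on the same adjusted grade (7.5 and 6.5), which is the kind of correction students in the study were, or were not, shown an explanation for.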

He found that when grade expectations were violated (the mark was lower than expected), students showed more trust in the algorithm if they had received an explanation. However, when students saw both an explanation and their raw data, trust fell to levels equal to, or lower than, those of students who experienced low transparency.

Revealing algorithms raises several problems: they often constitute valuable company IP, much of the public lacks the mathematical fluency to interpret them, and an algorithm that is fully disclosed can be ‘gamed’.

Caution, the authors write, is advised: “Users will not trust black box models, but they don’t need – or even want – extremely high levels of transparency.

“Instead, [companies] should work to provide basic insights on the factors driving algorithmic decisions.” In one of the less reported parts of the EU’s GDPR, citizens are able to demand a right to explanation of automated decisions. What is far more complex is working out how to present this both to the public and to regulators.

How should companies prepare for a more regulated algorithmic world? “It is worth remembering that building trust in machine learning and analytics will require a system of relationships, where regulators, for example, get high levels of transparency, and users accept medium levels.”

Sourced from Harvard Business Review; additional content by WARC staff