How do you fight an algorithm you cannot see?

https://techcrunch.com/2019/01/15/how-do-you-fight-an-algorithm-you-cannot-see/


That question in the headline was the challenge posed by a group of open knowledge junkies in Germany who wanted to understand how a person’s Schufa score is calculated. Schufa is a credit bureau that generates financial scores for potential borrowers in Germany, roughly equivalent to a FICO score in the United States. Schufa’s algorithm is not open, and so important financial decisions are mediated by an unknown process that can be quite capricious in its scoring.

So the activists created a platform called OpenSchufa that would attempt to discover the details of this algorithm. Under German law, citizens have the right to request their financial data from companies like Schufa, and so a movement was created to get as many citizens to request their data from the company as possible and then have them donate the data they receive to the project.

Since its launch, several thousand people have donated their scores, and the activists have learned that the algorithm can be quite “error-prone” – generating relatively negative scores without any negative evidence. The release of these results has prompted regulators to argue for more transparency around credit scores in Germany, and has also led Schufa to start offering its disclosures in a digital format rather than on paper.
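The OpenSchufa approach can be sketched in miniature: collect donated (input, score) pairs and fit a simple model to approximate the hidden formula. The sketch below uses invented data and a single made-up feature (recent credit inquiries); it is purely illustrative, not a claim about how Schufa actually scores anyone.

```python
# Sketch of the crowdsourcing idea: given donated (input, score)
# pairs, fit a line to approximate the hidden scoring formula.
# All data here is invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Donated data: number of recent credit inquiries -> reported score.
inquiries = [0, 1, 2, 3, 4]
scores = [700, 685, 670, 655, 640]

slope, intercept = fit_line(inquiries, scores)
print(f"each inquiry appears to cost roughly {abs(slope):.0f} points")
```

With enough donated records and more features, the same idea scales up to a regression that reveals which inputs the bureau appears to weight, even without ever seeing the algorithm itself.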

These sorts of crowdsourced algorithmic accountability exercises are not unique to Germany, or to deep learning processes. In the United States, there was a movement for a time around getting access to college admissions data. Under FERPA, students who matriculate at a university have the right to their data, including their admissions file. There was an attempt (albeit mostly unsuccessful) to collate a large number of these files and figure out how admissions offices made decisions.

I love both of these examples, because I love the idea that we can take our own democratic action to make the world a bit less complicated. Alas, it is not that simple.

One of the biggest challenges today for machine learning is what is known as the “black box problem.” Software engineers can test algorithms to see if their output matches the expectations of a test set, but we have no insight into how the algorithm actually arrived at its final decision. We know that a loan application is denied, but we don’t know whether it was because of a history of unpaid bills or because the applicant has red hair. Researchers, such as Been Kim at Google Brain, have studied how to open up that black box through the use of a “translator,” but such work remains preliminary.
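One common way to peek inside a black box without opening it is to probe it: nudge one input at a time and watch how the output moves. The toy below invents a hidden scoring function (we pretend we cannot read it) and a hypothetical loan applicant to show the idea; none of it reflects any real credit model.

```python
# Toy illustration of black-box probing: perturb one feature at a
# time and measure the score shift. The scoring function is entirely
# made up -- it stands in for a proprietary model we cannot inspect.

def black_box_score(applicant):
    # Hidden, hypothetical logic: pretend this is unreadable.
    score = 600
    score -= 40 * applicant["unpaid_bills"]
    score += 2 * min(applicant["years_of_credit_history"], 25)
    return score

def probe_feature(applicant, feature, delta):
    """Return how much the score shifts when one feature is nudged."""
    baseline = black_box_score(applicant)
    perturbed = dict(applicant)
    perturbed[feature] += delta
    return black_box_score(perturbed) - baseline

applicant = {"unpaid_bills": 2, "years_of_credit_history": 10}

for feature in applicant:
    shift = probe_feature(applicant, feature, 1)
    print(f"{feature}: +1 changes the score by {shift}")
```

Probing like this reveals that unpaid bills dominate the toy score, without ever reading the model. The catch, as the paragraph above notes, is that real deep learning models are far less well-behaved than this toy, and one probe per feature tells you little about their interactions.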

Algorithms are proprietary, though, and monopolistic within their context (a customer can’t select the algorithm used to assess their credit, for instance). Without the data, and without the published algorithm, it’s extremely difficult to understand how a decision is being made. And in the case of deep learning, it’s essentially impossible even if you do have both the data and the algorithm.

That has led to a growing movement of theorists concerned about algorithmic accountability, of ensuring that we both understand how an algorithm makes a decision, and that the decision-making is legally non-discriminatory. Social theorists like Frank Pasquale have warned that we are creating a “black box society” in which key moments of our lives are mediated by unknown, unseen, and arbitrary algorithms. Algorithmic accountability is designed to stop that pattern.

This is a real problem, without easy solutions. I have been riffing on this idea of using technology to increase societal resilience, but this is a good example of how hard that can be. Clearly making algorithms simpler for humans to understand and building trust in these digital decision-makers is good for society, but we have no easy pathways to that outcome.

Consider that an open challenge for startups and entrepreneurs to try to solve.

TechCrunch is experimenting with new content forms. This is a rough draft of something new — provide your feedback directly to the author (Danny at danny@techcrunch.com) if you like or hate something here.
Share your feedback on your startup’s attorney

My colleague Eric Eldon and I are reaching out to startup founders and execs about their experiences with their attorneys. Our goal is to identify the leading lights of the industry and help spark discussions around best practices. If you have an attorney you thought did a fantastic job for your startup, let us know using this short Google Forms survey and also spread the word. We will share the results and more in the coming weeks.
Stray Thoughts (aka, what I am reading)

Short summaries and analysis of important news stories
More on societal resilience

Thanks for the many letters of feedback on my piece last week on societal resilience. There were many interesting comments, but a few stood out.

A reader named Andrew wrote: “I would highlight the reality that the bottom of this bottoms-up solution is self. We are all a startup of one and we choose what measures to gauge ourselves against. Whether it’s our personal GDP (income) or other factors, our personal startup must look inward to determine what’s important in the development of self.”

A reader named Cordula wrote: “I’ve been looking at regenerative design as proposed by Daniel Christian Wahl and others, and the examples you’ve cited fit well into that framework.” Regenerative design is an interesting field I had never heard of, which basically argues that systems should use their energies not only for output, but also to repair and heal themselves.

That’s going to be critical, because climate change appears to be accelerating even faster than predicted. A new report in the leading journal Science found that oceans are warming faster than models predicted.
