Our past Technical Chair discussed Evan Estola’s upcoming talk, “When Recommendation Systems Go Bad,” at MLconf Seattle, scheduled for May 20th.
Will machine learning fix machine learning? (I mean on the ethical side.)
EE) I don’t think that machine learning alone will help us make more ethical algorithms. From the most basic view, how can an algorithm ever know that using a particular feature is unethical without a human saying so? I’m definitely excited about progress that can be made in this area, and certainly there are tools to be developed that will help us make better decisions, but in the end ethics is a human decision.
Ethics is cultural: actions that are unethical in some cultures are ethical in others. What is the culture of an algorithm? Is it the culture of the author? Most of the recent fallouts did not really reflect the programmers’ views; the algorithms acted spontaneously. Are algorithms allowed to have their own culture?
EE) Algorithms are a reflection of the people that use them. As machine learners, we hold the keys to a part of the business that is rarely well understood by leadership. If there is a risk that your algorithm is doing something unethical or even illegal, you have an obligation to let people know. If your organizational values have not been defined, you should make sure to define them before you launch a model that could compromise them.
Expert psychologists often resort to manipulative techniques that target vulnerable social groups, such as kids, to push them to spend more. Yes, I am a parent, and I have felt that a lot of ads try to do that. Why is it fair for a group of humans to do this and not for a group of algorithms?
EE) Advertising to children is unethical and should be illegal! But this is just an opinion, not expert judgement. We know that targeting a product at a particular group of people is a key area of study in marketing, and it has been used to great effect. We also know that discrimination exists. We know that social injustice exists. As computers are trained to make more and more decisions for us, we can influence whether they encapsulate the bias that exists in our society or whether they do better.
When does statistical inference become unethical?
EE) Statistical inference is just a tool; how we use it makes it good or bad. Statistics will tell us that women earn less than men. If we use this to infer that we should pay women less, we are in the wrong. If we use that information to stand up and say, “This is wrong and we should fix it,” then the math has done good.
Companies are required by law to show that they take preventative measures with regard to ethics and compliance. Could the FairTest algorithm presented at this MLconf become the new ethics regulation for high tech? Or is sentiment-analysis monitoring a better way?
EE) I love the FairTest algorithm because it helps us solve the difficult problem of identifying when we have a feature that is a proxy for a feature we know we shouldn’t be using. Name can accidentally be used as a proxy for gender or race; zip code, a proxy for race or income. This is a difficult problem and one worth attacking.
Critically, the user must still determine which features aren’t allowed. We still have to have that difficult conversation about our values, our ethics, and what we will do about it. In terms of enforcing compliance, it is great that we have a tool that will show that an algorithm is biased against a particular group. Now let’s make sure everyone knows about it and what they can do to reverse it.
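To make the proxy-feature problem concrete, here is a minimal sketch — not FairTest’s actual algorithm — of the general idea: measure the statistical association between each candidate feature and a protected attribute, and flag any feature whose association exceeds a threshold. The function names, threshold, and data below are all hypothetical, for illustration only.

```python
# Hypothetical proxy-feature check: flag features that carry too much
# information about a protected attribute. (Illustrative only; FairTest's
# real method is more sophisticated.)
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def flag_proxies(features, protected, threshold=0.1):
    """Return names of features strongly associated with the protected attribute."""
    return [name for name, values in features.items()
            if mutual_information(values, protected) > threshold]

# Made-up records: zip code tracks income bracket perfectly,
# while browser choice is statistically independent of it.
protected_income = ["low", "low", "high", "high", "low", "high", "low", "high"]
features = {
    "zip_code": ["10001", "10001", "90210", "90210",
                 "10001", "90210", "10001", "90210"],
    "browser":  ["ff", "chrome", "ff", "chrome",
                 "ff", "chrome", "chrome", "ff"],
}
print(flag_proxies(features, protected_income))  # only zip_code is flagged
```

Even in a toy version like this, notice where the human judgment lives: someone must decide which attributes count as protected and what level of association is tolerable — exactly the conversation Estola says we cannot delegate to the algorithm.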

Evan Estola, Lead Machine Learning Engineer, Meetup
Evan is a Lead Machine Learning Engineer working on the Data Team at Meetup. Combining product design, machine learning research and software engineering, Evan builds systems that help Meetup’s members find the best thing in the world: real local community. Before Meetup, Evan worked on hotel recommendations at Orbitz Worldwide, and he began his career in the Information Retrieval Lab at the Illinois Institute of Technology.