- shima__shima
- 3908
- 5
- 0
- 0
Awesome talk by @morgangames and @roeldobbe about the complications of using tech to incorporate ethics with VSD; but it could be a "fairly structured first step" to a difficult question #FAT2019 pic.twitter.com/E6EHxtfSvL
2019-01-30 03:33:59
One of the early take-aways reiterated a few times by @kdphd and @scheidegger: preprocessing matters! Even here we face trade-off implications for fairness and accuracy. #FAT2019 pic.twitter.com/yrKqY5j7qZ
2019-01-30 03:40:12
Award announcement #FAT2019! 🏅Best Technical + Interdisciplinary Paper🏅 Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments (@benzevgreen & Yiling Chen). Presented/livestreamed on Jan 31 at 3:20 PM. Paper: doi.org/gftmqh pic.twitter.com/Tk3FdVjrKc
2019-01-30 03:40:15
Award announcement #FAT2019! 🏅Best Technical Paper🏅 Controlling Polarization in Personalization: An Algorithmic Framework (@profelisacelis, @sayashk, F Salehi & @NisheethVishnoi). Catch it presented at ~3:40, 30 Jan. Paper: doi.org/gftmpz pic.twitter.com/BC8mKizEjB
2019-01-30 03:47:10
.@Reubenpoet @dgrobinson tell the actual human stories of how #riskassessments ignore people's due process, and workshop with data scientists how to oversee it in concert with the community #FAT2019 pic.twitter.com/soFzrxUZLU
2019-01-30 03:48:38
Last award announcement #FAT2019 🏅Best Non-Archival Submission🏅 Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People (@oziadias & @m_sendhil). Presented/livestreamed ~3:30, Jan 31. Extended Abstract: doi.org/gftmqj pic.twitter.com/1jJgfdlh06
2019-01-30 03:52:07
Tutorial taking lessons from the history of the field of fairness and testing from the 1950s to the 1980s. #FAT2019 @benhutchinson @mmitchell_ai @shiraamitchell pic.twitter.com/yNK8IZk8bH
2019-01-30 04:06:01
Our @fatconference tutorial on AI Fairness 360 is underway to a packed room with @krvarshney @nrkarthikeyan @rkeb1 @IBMResearch #FAT2019 pic.twitter.com/ApuDYVYRFZ
2019-01-30 05:22:51
A term I hadn't heard before: "wound collecting." Refers to a collection of grievances over time. Hate groups try to take advantage of this phenomenon on online platforms to spread their ideologies & build hate that leads to violent action. #fat2019
2019-01-30 05:30:04
At the end of the @splcenter talk: they recommend checking out changetheterms.org, which works to gather insights on how hate operates on online platforms & creates corporate policies for social media platforms. #fat2019
2019-01-30 05:32:06
They mentioned checking out @ProPublica's work on documenting hate & hate crimes in the US. They've got a lot of solid articles and data here: projects.propublica.org/graphics/hatec… #FAT2019
2019-01-30 05:36:14
Interesting Q on the causation between hate speech online and hate crimes irl. A: "There's a hole in the research showing how activity online leads to activity in the real world." Hankes lets us know that this type of research needs to be done. #FAT2019
2019-01-30 05:40:07
Really interesting insight from Hankes: "The more violent spaces online don't often churn out the most violent people in the world" (which might be opposed to what we expect). Cites how the content Roof saw was "mild" compared to some of the content out there. #FAT2019
2019-01-30 05:47:03
Switching gears to the tutorial on measuring unintended bias in text classification, looking at comments and labeling their toxicity and identity references. Slides and code here: github.com/conversationai… #FAT2019
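The tutorial's core idea, comparing a toxicity classifier's error rates between comments that mention an identity group and those that don't, can be sketched in a few lines. Everything below (the toy data, the `false_positive_rate` helper, the label/mention flags) is made up for illustration; the tutorial's actual metrics and notebooks are in the linked repo.

```python
# Sketch: unintended bias as a gap in false-positive rates between
# comments mentioning an identity group and the rest of the data.
# All data and names here are hypothetical.

def false_positive_rate(predictions, labels):
    """Fraction of truly non-toxic comments (label 0) flagged as toxic (prediction 1)."""
    flagged_negatives = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(flagged_negatives) / len(flagged_negatives) if flagged_negatives else 0.0

# (predicted_toxic, truly_toxic, mentions_identity) triples
examples = [
    (1, 0, True), (1, 0, True), (0, 0, True), (1, 1, True),
    (0, 0, False), (1, 0, False), (0, 0, False), (1, 1, False),
]

subgroup = [(p, y) for p, y, m in examples if m]
background = [(p, y) for p, y, m in examples if not m]

fpr_subgroup = false_positive_rate(*zip(*subgroup))
fpr_background = false_positive_rate(*zip(*background))

# A large gap suggests the classifier disproportionately flags
# non-toxic comments that merely mention the identity group.
print(fpr_subgroup, fpr_background)
```

On this toy data the subgroup's false-positive rate (2/3) is double the background's (1/3), the kind of gap the tutorial's metrics are designed to surface.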
2019-01-30 05:54:54
“Quantitative definitions of fairness are means, not ends.” Interesting talk with Margaret Mitchell and Ben Hutchinson on parallels between test and ML fairness. #FAT2019
2019-01-30 05:57:12
This is your annual reminder that real human people with rich lived experiences are affected by criminal risk scoring. #FAT2019 pic.twitter.com/Q17WZ3LqOF
2019-01-30 05:59:20
If you have ML code to release but are concerned about potential uses beyond the scope of your work, think about releasing your code under the Responsible AI Licenses (licenses.ai)! #FAT2019 #AAAI #AAAI2019
2019-01-30 06:01:22
Cynthia Conti-Cook, @datasociety fellow, presenting her work on algorithms & risks & incarceration at #FAT2019 pic.twitter.com/4g9tVTglxn
2019-01-30 06:02:08
Glad to be participating in #FAT2019 in Atlanta GA discussing #race and #MachineLearning Wednesday with coauthor Sebastian Benthall pic.twitter.com/kv9p3X9yXG
2019-01-30 06:05:08
We (modelers, lawyers, policy advocates) need to distinguish between bias and fairness. They are not interchangeable. “Bias is a feature of statistical models. Fairness is a feature of human value judgements” @undersequoias #FAT2019
2019-01-30 06:09:26
“The blue represents me.” Our data is not neutral. #FAT2019 pic.twitter.com/5KsBm5e8yo
2019-01-30 06:10:53
Conti-Cook and Rodriguez: the COMPAS system for automated criminal risk prediction has led to unfair denial of parole for prisoners, sometimes overriding more relevant reasons for assessing parole #FAT2019 pic.twitter.com/LcZwt3uyp2
2019-01-30 06:16:23
“Just say no” to ambiguous questions and other failures of survey design for COMPAS risk assessment, says Cynthia Conti-Cook of @LegalAidNYC & @datasociety, presenting with Glenn Rodriguez #FAT2019 pic.twitter.com/Q6fttOXz2F
2019-01-30 06:47:41
Just starting: the #FAT2019 tutorial on the challenges of incorporating fairness in practice pic.twitter.com/VlhFEnCBET
2019-01-30 07:04:49