Synopsis
Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.
Episodes
- Retraction Watch (05/10/2020, 32 min)
  Ivan Oransky joins us to discuss his work documenting the scientific peer-review process at retractionwatch.com.
- Crowdsourced Expertise (21/09/2020, 27 min)
  Derek Lim joins us to discuss the paper Expertise and Dynamics within Crowdsourced Musical Knowledge Curation: A Case Study of the Genius Platform.
- The Spread of Misinformation Online (14/09/2020, 35 min)
  Neil Johnson joins us to discuss the paper The online competition between pro- and anti-vaccination views.
- Consensus Voting (07/09/2020, 22 min)
  Mashbat Suzuki joins us to discuss the paper How Many Freemasons Are There? The Consensus Voting Mechanism in Metric Spaces. Check out Mashbat’s and many other great talks at the 13th Symposium on Algorithmic Game Theory (SAGT 2020).
- Voting Mechanisms (31/08/2020, 27 min)
  Steven Heilman joins us to discuss his paper Designing Stable Elections. For a general-interest article, see: https://theconversation.com/the-electoral-college-is-surprisingly-vulnerable-to-popular-vote-changes-141104
  Steven Heilman receives funding from the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
- False Consensus (24/08/2020, 33 min)
  Sami Yousif joins us to discuss the paper The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. This work empirically explores how individuals evaluate consensus when reviewing online news articles under different experimental conditions. More from Sami at samiyousif.org. Link to the survey mentioned by Daniel Kerrigan: https://forms.gle/TCdGem3WTUYEP31B8
- Fraud Detection in Real Time (18/08/2020, 38 min)
  In this solo episode, Kyle gives an overview of the field of fraud detection, using eCommerce as a use case. He discusses some of the techniques and system architectures companies use to fight fraud, focusing on why fraud detection needs to happen in real time.
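  As a toy illustration of the real-time angle, here is a sliding-window velocity check in Python; the rule, window size, and threshold are invented for the example, not taken from the episode:

  ```python
  from collections import defaultdict, deque
  import time

  WINDOW_SECONDS = 60          # illustrative sliding window
  MAX_TXNS_PER_WINDOW = 5      # illustrative velocity threshold

  recent = defaultdict(deque)  # card_id -> timestamps of recent transactions

  def looks_suspicious(card_id: str, now: float | None = None) -> bool:
      """Flag a transaction when one card attempts too many purchases
      inside the sliding window."""
      now = time.time() if now is None else now
      q = recent[card_id]
      while q and now - q[0] > WINDOW_SECONDS:
          q.popleft()          # evict events that fell out of the window
      q.append(now)
      return len(q) > MAX_TXNS_PER_WINDOW

  # The sixth attempt within one minute trips the check.
  print([looks_suspicious("card-123", now=float(t)) for t in range(6)])
  # -> [False, False, False, False, False, True]
  ```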
- Listener Survey Review (11/08/2020, 23 min)
  In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results.
- Human Computer Interaction and Online Privacy (27/07/2020, 32 min)
  Moses Namara from the HATLab joins us to discuss his research at the intersection of privacy and human-computer interaction.
- Authorship Attribution of Lennon-McCartney Songs (20/07/2020, 33 min)
  Mark Glickman joins us to discuss the paper Data in the Life: Authorship Attribution in Lennon-McCartney Songs.
- GANs Can Be Interpretable (11/07/2020, 26 min)
  Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself.
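  The paper's core idea is principal component analysis applied to a GAN's latent or feature space to discover edit directions. A minimal numpy sketch of that idea, with a toy anisotropic latent distribution standing in for a real pretrained mapping network and generator:

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # GANSpace runs PCA on intermediate latents w = M(z), whose distribution
  # is anisotropic. A fixed random per-axis scaling simulates that here.
  scales = rng.uniform(0.1, 2.0, size=512)
  z = rng.standard_normal((10_000, 512))   # z ~ N(0, I)
  w = z * scales                           # toy surrogate for w = M(z)

  # PCA via SVD of the centered latents.
  w_centered = w - w.mean(axis=0)
  _, _, Vt = np.linalg.svd(w_centered, full_matrices=False)
  directions = Vt[:10]                     # top-10 principal directions

  # An "edit" moves a latent along one principal direction.
  w_edit = w[0] + 3.0 * directions[0]
  # image = G(w_edit)  # a pretrained generator would render the edit
  ```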
- Sentiment Preserving Fake Reviews (06/07/2020, 28 min)
  David Ifeoluwa Adelani joins us to discuss the paper Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection.
- Interpretability Practitioners (26/06/2020, 32 min)
  Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.
- Facial Recognition Auditing (19/06/2020, 47 min)
  Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.
- Robust Fit to Nature (12/06/2020, 38 min)
  Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.
- Black Boxes Are Not Required (05/06/2020, 32 min)
  Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as “black boxes”. While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition
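  As a toy illustration of the episode's question (assuming scikit-learn and a generic tabular dataset; neither model comes from the competition discussed in the paper), one can compare a small, fully readable decision tree against a larger ensemble on the same task:

  ```python
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split
  from sklearn.tree import DecisionTreeClassifier, export_text

  X, y = load_breast_cancer(return_X_y=True)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

  # A many-parameter "black box" versus a model you can read in full.
  black_box = RandomForestClassifier(n_estimators=200, random_state=0)
  simple = DecisionTreeClassifier(max_depth=3, random_state=0)
  black_box.fit(X_tr, y_tr)
  simple.fit(X_tr, y_tr)

  print("forest accuracy:", black_box.score(X_te, y_te))
  print("depth-3 tree accuracy:", simple.score(X_te, y_te))
  print(export_text(simple))  # the whole model, printed as if-then rules
  ```

  On easy tasks the gap between the two is often small, which is the paper's point: check for a simple model before reaching for a black box.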
- Robustness to Unforeseen Adversarial Attacks (30/05/2020, 21 min)
  Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.
- Estimating the Size of Language Acquisition (22/05/2020, 25 min)
  Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.
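  Some back-of-the-envelope arithmetic to unpack the headline figure (the 18-year acquisition period used for the per-day rate is our assumption, not necessarily the paper's):

  ```python
  # Illustrative arithmetic on the headline figure only.
  total_bits = 1.5e6 * 8               # 1.5 megabytes -> 12,000,000 bits
  per_day = total_bits / (18 * 365)    # assumes an 18-year acquisition period
  print(f"{total_bits:,.0f} bits total, ~{per_day:,.0f} bits/day")
  # -> 12,000,000 bits total, ~1,826 bits/day
  ```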
- Interpretable AI in Healthcare (15/05/2020, 35 min)
  Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.
- Understanding Neural Networks (08/05/2020, 34 min)
  What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.