Data Skeptic

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 292:14:46

Synopsis

Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.

Episodes

  • Fraud Detection in Real Time

    18/08/2020 Duration: 38min

In this solo episode, Kyle gives an overview of the field of fraud detection, with eCommerce as a use case. He discusses some of the techniques and system architectures companies use to fight fraud, focusing on why the problem needs to be approached from a real-time perspective.

  • Listener Survey Review

    11/08/2020 Duration: 23min

    In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results.

  • Human Computer Interaction and Online Privacy

    27/07/2020 Duration: 32min

Moses Namara from the HATLab joins us to discuss his research at the intersection of privacy and human-computer interaction.

  • Authorship Attribution of Lennon McCartney Songs

    20/07/2020 Duration: 33min

    Mark Glickman joins us to discuss the paper Data in the Life: Authorship Attribution in Lennon-McCartney Songs.

  • GANs Can Be Interpretable

    11/07/2020 Duration: 26min

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, a quick way to try these ideas out for yourself.

  • Sentiment Preserving Fake Reviews

    06/07/2020 Duration: 28min

    David Ifeoluwa Adelani joins us to discuss Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection.

  • Interpretability Practitioners

    26/06/2020 Duration: 32min

    Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

  • Facial Recognition Auditing

    19/06/2020 Duration: 47min

    Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

  • Robust Fit to Nature

    12/06/2020 Duration: 38min

    Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.

  • Black Boxes Are Not Required

    05/06/2020 Duration: 32min

Deep neural networks are undeniably effective. They rely on such a large number of parameters that they are appropriately described as “black boxes”. While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition

  • Robustness to Unforeseen Adversarial Attacks

    30/05/2020 Duration: 21min

    Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.

  • Estimating the Size of Language Acquisition

    22/05/2020 Duration: 25min

Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.

  • Interpretable AI in Healthcare

    15/05/2020 Duration: 35min

    Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.

  • Understanding Neural Networks

    08/05/2020 Duration: 34min

What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.

  • Self-Explaining AI

    02/05/2020 Duration: 32min

Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user? We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.

  • Plastic Bag Bans

    24/04/2020 Duration: 34min

    Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

  • Self Driving Cars and Pedestrians

    18/04/2020 Duration: 30min

    We are joined by Arash Kalatian to discuss Decoding pedestrian and automated vehicle interactions using immersive virtual reality and interpretable deep learning.

  • Computer Vision is Not Perfect

    10/04/2020 Duration: 26min

Julia Evans joins us to help answer the question: why do neural networks think a panda is a vulture? Kyle talks to Julia about her hands-on work fooling neural networks. Julia runs Wizard Zines, which publishes works such as Your Linux Toolbox. You can find her on Twitter @b0rk

  • Uncertainty Representations

    04/04/2020 Duration: 39min

    Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates. Homepage: http://users.eecs.northwestern.edu/~jhullman/ Lab: MU Collective

  • AlphaGo, COVID-19 Contact Tracing and New Data Set

    28/03/2020 Duration: 33min

Announcing Journal Club: I am pleased to announce that Data Skeptic is launching a new spin-off show called "Journal Club", with similar themes but a very different format from the Data Skeptic everyone is used to. In Journal Club, we will have a regular panel and occasional guest panelists to discuss interesting news items and one featured journal article every week in a roundtable discussion. Each week, I'll be joined by Lan Guo and George Kemp for a discussion of interesting data science news articles and a featured journal or pre-print article. We hope that this podcast will give listeners an introduction to the works we cover and how people discuss them. Our topics will often coincide with the original Data Skeptic podcast's current interpretability theme, but we have few rules right now on what we pick. We enjoy discussing these items with each other, and we hope you will too. In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion.
