Data Skeptic

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 299:48:45

Synopsis

Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.

Episodes

  • Interpretable One Shot Learning

    26/01/2020 Duration: 30min

    We welcome Su Wang back to Data Skeptic to discuss the paper "Distributional modeling on a diet: One-shot word learning from text only".

  • Fooling Computer Vision

    22/01/2020 Duration: 25min

    Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLO2 pre-trained image recognition model and is thus likely to be widely applicable.

  • Algorithmic Fairness

    14/01/2020 Duration: 42min

    This episode includes an interview with Aaron Roth, co-author of The Ethical Algorithm.

  • Interpretability

    07/01/2020 Duration: 32min

    Machine learning has rapidly expanded into every sector and industry. With increasing reliance on models, and increasing stakes for the decisions they make, questions of how models actually work are becoming increasingly important to ask. Welcome to Data Skeptic Interpretability. In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning. Thanks to our sponsor, the Gartner Data & Analytics Summit, taking place in Grapevine, TX on March 23-26, 2020 (use discount code: dataskeptic). Music: our new theme song is "#5" by Big D and the Kids Table; incidental music by Tanuki Suit Riot.

  • NLP in 2019

    31/12/2019 Duration: 38min

    A recap of the year in NLP.

  • The Limits of NLP

    24/12/2019 Duration: 29min

    We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".

  • Jumpstart Your ML Project

    15/12/2019 Duration: 20min

    Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.

  • Serverless NLP Model Training

    10/12/2019 Duration: 29min

    Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.

  • Team Data Science Process

    03/12/2019 Duration: 41min

    Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.  

  • Ancient Text Restoration

    01/12/2019 Duration: 41min

    Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text.

  • ML Ops

    27/11/2019 Duration: 36min

    Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

  • Annotator Bias

    23/11/2019 Duration: 25min

    Modern deep learning approaches to natural language processing are voracious in their demands for large training corpora. Folk wisdom used to hold that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora. Thanks to these advancements, an NLP researcher can get value out of fewer examples, using transfer learning for a head start and focusing on the nuances of the language specifically relevant to the task at hand. Small specialized corpora are thus both useful and practical to create. In this episode, Kyle speaks with Mor Geva, lead author of the recent paper "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets", which explores some unintended consequences of the typical procedure followed for generating corpora.

  • NLP for Developers

    20/11/2019 Duration: 29min

    While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team to discuss how tools like Cognitive Services and Cognitive Search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can spend more time on bigger-picture problems.

  • Indigenous American Language Research

    13/11/2019 Duration: 22min

    Manuel Mager joins us to discuss natural language processing for low- and under-resourced languages. We discuss current work in this area and the Naki Project, which aggregates research on NLP for native and indigenous languages of the Americas.

  • Talking to GPT-2

    31/10/2019 Duration: 29min

    GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches are showing tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that, and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence.

  • Reproducing Deep Learning Models

    23/10/2019 Duration: 22min

    Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

  • What BERT is Not

    14/10/2019 Duration: 27min

    Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.

  • SpanBERT

    08/10/2019 Duration: 24min

    Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". https://arxiv.org/abs/1907.10529

  • BERT is Shallow

    23/09/2019 Duration: 20min

    Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks, in areas such as robustness to adversarial attacks, compositional learning, and systematic learning.

  • BERT is Magic

    16/09/2019 Duration: 18min

    Kyle pontificates on how impressed he is with BERT.
