Data Skeptic

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 298:52:45

Synopsis

Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.

Episodes

  • [MINI] One Shot Learning

    22/09/2017 Duration: 17min

    One Shot Learning is the class of machine learning procedures that focuses on learning from a small number of examples. This is in contrast to "traditional" machine learning, which typically requires a very large training set to build a reasonable model. In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely to be the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large amounts of training data? We discuss some of the reasons why, along with approaches to One Shot Learning.
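
    As a rough illustration of the idea (not code from the episode), a common one-shot approach compares a new example against a single stored example per class in an embedding space. The sketch below assumes a hypothetical embed function standing in for a learned feature extractor.

    ```python
    import numpy as np

    def embed(x):
        # Stand-in embedding; in practice this would be a pretrained
        # network's feature extractor.
        return np.asarray(x, dtype=float)

    def one_shot_classify(query, support_set):
        """support_set: list of (label, example) pairs, one example per class."""
        q = embed(query)
        # Label the query with the class of its single nearest neighbor.
        return min(support_set, key=lambda lx: np.linalg.norm(q - embed(lx[1])))[0]

    # One stored example per class is enough to label a new point.
    print(one_shot_classify([1.0, 2.0], [("A", [0.9, 2.1]), ("B", [5.0, 5.0])]))
    ```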

  • Recommender Systems Live from FARCON 2017

    15/09/2017 Duration: 46min

    Recommender systems play an important role in providing personalized content to online users. Yet, typical data mining techniques are not well suited for the unique challenges that recommender systems face. In this episode, host Kyle Polich joins Dr. Joseph Konstan from the University of Minnesota at a live recording at FARCON 2017 in Minneapolis to discuss recommender systems and how machine learning can create better user experiences. 
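
    As a minimal sketch of the collaborative-filtering idea behind many recommender systems (a generic illustration, not the specific systems discussed in the episode), unrated items can be scored using the similarity-weighted ratings of other users. All data here is made up.

    ```python
    import numpy as np

    # Toy ratings matrix: rows = users, columns = items, 0 = unrated.
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 1, 0],
                  [1, 0, 5, 4],
                  [0, 1, 4, 5]], dtype=float)

    def recommend(user, R, k=1):
        # Cosine similarity between this user and every other user.
        sims = R @ R[user] / (np.linalg.norm(R, axis=1) * np.linalg.norm(R[user]) + 1e-9)
        sims[user] = 0.0                   # ignore self-similarity
        scores = sims @ R                  # similarity-weighted ratings
        scores[R[user] > 0] = -np.inf      # only recommend unrated items
        return np.argsort(scores)[::-1][:k]

    print(recommend(0, R))                 # top unrated item for user 0
    ```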

  • [MINI] Long Short Term Memory

    08/09/2017 Duration: 15min

    Thanks to our sponsor brilliant.org/dataskeptics. A Long Short Term Memory (LSTM) is a neural unit, often used in Recurrent Neural Networks (RNNs), that attempts to provide the network with the capacity to store information for longer periods of time. An LSTM unit remembers values for either long or short time periods. The key to this ability is that it uses no activation function within its recurrent components. Thus, the stored value is not iteratively modified, and the gradient does not tend to vanish when trained with backpropagation through time.
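
    A minimal numpy sketch of a single LSTM step, following the standard gate formulation (an illustration, not code from the episode). The cell state c is updated additively and passes between steps unsquashed, which is why its gradient survives backpropagation through time.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # W, U, b stack the parameters of the four gates: i, f, o, g.
        z = W @ x + U @ h_prev + b
        n = len(c_prev)
        i, f, o = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))
        g = np.tanh(z[3 * n:4 * n])   # candidate values
        c = f * c_prev + i * g        # additive cell-state update, no activation
        h = o * np.tanh(c)            # hidden state passed onward
        return h, c
    ```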

  • Zillow Zestimate

    01/09/2017 Duration: 37min

    Zillow is a leading real estate information and home-related marketplace. We interviewed Andrew Martin, a data science Research Manager at Zillow, to learn more about how Zillow uses data science and big data to make real estate predictions.

  • Cardiologist Level Arrhythmia Detection with CNNs

    25/08/2017 Duration: 32min

    Our guest Pranav Rajpurkar and his coauthors recently published Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks, a paper in which they demonstrate the use of Convolutional Neural Networks that outperform board-certified cardiologists in detecting a wide range of heart arrhythmias from ECG data.

  • [MINI] Recurrent Neural Networks

    18/08/2017 Duration: 17min

    RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights whose output depends not just on the current input but also on the previous state of the neural network. This directed cycle allows the training phase to find solutions which rely on the state at a previous time, thus giving the network a form of memory. RNNs have been used effectively in language analysis, translation, speech recognition, and many other tasks.
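
    A sketch of the recurrence, assuming a vanilla (Elman-style) RNN: the same weights are reused at every time step, and the hidden state h carries information forward, giving the memory described above.

    ```python
    import numpy as np

    def rnn_forward(xs, W_x, W_h, b, h0):
        """Run a vanilla RNN over a sequence, returning all hidden states."""
        h, states = h0, []
        for x in xs:
            h = np.tanh(W_x @ x + W_h @ h + b)  # depends on input AND prior state
            states.append(h)
        return states

    xs = [np.array([1.0]), np.array([0.0]), np.array([1.0])]
    print(rnn_forward(xs, np.array([[0.5]]), np.array([[0.9]]), np.zeros(1), np.zeros(1)))
    ```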

  • Project Common Voice

    11/08/2017 Duration: 31min

    Thanks to our sponsor Springboard. In this week's episode, guest Andre Natal from Mozilla joins our host, Kyle Polich, to discuss a couple of exciting new developments in open source speech recognition systems, including Project Common Voice. In June 2017, Mozilla launched a new open source project, Common Voice, a novel complementary project to the TensorFlow-based DeepSpeech implementation. DeepSpeech is a deep learning-based voice recognition system that was designed by Baidu, which they describe in greater detail in their research paper. DeepSpeech is a speech-to-text engine, and Mozilla hopes that, in the future, they can use Common Voice data to train their DeepSpeech engine.

  • [MINI] Bayesian Belief Networks

    04/08/2017 Duration: 17min

    A Bayesian Belief Network is an acyclic directed graph composed of nodes that represent random variables and edges that imply a conditional dependence between them. It's an intuitive way of encoding your statistical knowledge about a system, and it makes it efficient to propagate belief updates throughout the network when new information is added.
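
    A toy example of the idea (numbers invented for illustration): the classic sprinkler network, where Rain and Sprinkler each point to WetGrass. Enumerating the joint distribution propagates a belief update once we observe wet grass.

    ```python
    # Network: Rain -> WetGrass <- Sprinkler
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.1, False: 0.9}
    P_wet = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.8, (False, False): 0.0}

    def p_rain_given_wet():
        # Condition on WetGrass = True by enumerating the joint distribution.
        num = den = 0.0
        for r in (True, False):
            for s in (True, False):
                joint = P_rain[r] * P_sprinkler[s] * P_wet[(r, s)]
                den += joint
                if r:
                    num += joint
        return num / den

    print(p_rain_given_wet())  # belief in rain rises from 0.2 after seeing wet grass
    ```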

  • pix2code

    28/07/2017 Duration: 26min

    In this episode, Tony Beltramelli of UIzard Technologies joins our host, Kyle Polich, to talk about the ideas behind his latest app that can transform graphic design into functioning code, as well as his previous work on spying with wearables.

  • [MINI] Conditional Independence

    21/07/2017 Duration: 14min

    In statistics, two random variables might depend on one another (for example, interest rates and new home purchases). We call this conditional dependence. An important related concept exists called conditional independence. This phrase describes situations in which two variables are independent of one another given some other variable. For example, the probability that a vendor will pay their bill on time could depend on many factors such as the company's market cap. Thus, a statistical analysis would reveal many relationships between observable details about the company and their propensity for paying on time. However, if you know that the company has filed for bankruptcy, then we might assume their chances of paying on time have dropped to near 0, and the result is now independent of all other factors in light of this new information. We discuss a few real world analogies to this idea in the context of some chance meetings on our recent trip to New York City.
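
    A numeric toy version of the bankruptcy example (all probabilities invented): once we condition on bankruptcy status C, payment behavior A is independent of market cap B, i.e. P(A | B, C) = P(A | C).

    ```python
    import itertools

    P_C = {True: 0.05, False: 0.95}          # P(bankrupt)
    P_A_given_C = {True: 0.02, False: 0.9}   # P(pays on time | bankruptcy status)
    P_B_given_C = {True: 0.3, False: 0.6}    # P(large market cap | bankruptcy status)

    def joint(a, b, c):
        # A and B are generated independently once C is fixed.
        pa = P_A_given_C[c] if a else 1 - P_A_given_C[c]
        pb = P_B_given_C[c] if b else 1 - P_B_given_C[c]
        return P_C[c] * pa * pb

    for b, c in itertools.product((True, False), repeat=2):
        p_a_given_bc = joint(True, b, c) / (joint(True, b, c) + joint(False, b, c))
        assert abs(p_a_given_bc - P_A_given_C[c]) < 1e-12  # B adds nothing given C
    print("P(A | B, C) == P(A | C) for every assignment")
    ```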

  • Estimating Sheep Pain with Facial Recognition

    14/07/2017 Duration: 27min

    Animals can't tell us when they're experiencing pain, so we have to rely on other cues to help treat their discomfort. But it is often difficult to tell how much an animal is suffering. The sheep, for instance, is the most inscrutable of animals. However, scientists have figured out a way to understand sheep facial expressions using artificial intelligence. On this week's episode, Dr. Marwa Mahmoud from the University of Cambridge joins us to discuss her recent study, "Estimating Sheep Pain Level Using Facial Action Unit Detection." Marwa and her colleagues at Cambridge's Computer Laboratory developed an automated system using machine learning algorithms to detect and assess when a sheep is in pain. We discuss some details of her work, how she became interested in studying sheep facial expressions to measure pain, and her future goals for this project. If you're able to be in Minneapolis, MN on August 23rd or 24th, consider attending Farcon. Get your tickets today via https://farcon2017.eventbrite.com.

  • CosmosDB

    07/07/2017 Duration: 33min

    This episode collects interviews from my recent trip to Microsoft Build, where I had the opportunity to speak with Dharma Shukla and Syam Nair about the recently announced CosmosDB. CosmosDB is a globally consistent, distributed datastore that supports all the popular persistent storage formats (relational, key/value pair, document database, and graph) under a single streamlined API. The system provides tunable consistency: users who want to go beyond the selected defaults can make choices about how consistency trade-offs are managed under the hood.

  • [MINI] The Vanishing Gradient

    30/06/2017 Duration: 15min

    This episode discusses the vanishing gradient - a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning virtually impossible without some clever trick or improved methodology to help earlier layers begin to learn.
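
    A back-of-the-envelope illustration (not from the episode): the gradient reaching an early layer includes a product of per-layer derivative factors, and the sigmoid's derivative is at most 0.25, so that product collapses toward zero within a handful of layers.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    np.random.seed(0)
    grad = 1.0
    for layer in range(1, 11):
        s = sigmoid(np.random.randn())
        grad *= s * (1 - s)  # each factor is at most 0.25
        print(f"layer {layer}: surviving gradient factor ~ {grad:.2e}")
    ```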

  • Doctor AI

    23/06/2017 Duration: 41min

    When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Networks, shares his thoughts. Edward presents his team's efforts in developing a temporal model that can learn from human doctors based on their collective knowledge, i.e. the large amount of Electronic Health Record (EHR) data.

  • [MINI] Activation Functions

    16/06/2017 Duration: 14min

    In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation, which can only scale the data. However, other transformations, like a step function, allow for non-linear properties to be introduced. Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" the area of interest in the data: extreme values are placed close together, while values near the function's point of inflection change more quickly with respect to small changes in the input. Similarly, these functions can take any real number and map all of them onto a finite range such as [0, 1], which can have many advantages for downstream calculations. In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.
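
    A quick sketch comparing a few common choices (generic illustrations, not the episode's specific examples). Note how the sigmoid maps every input into (0, 1), changing fastest near its inflection point at zero and saturating at the extremes.

    ```python
    import numpy as np

    def step(z):    return np.where(z > 0, 1.0, 0.0)
    def sigmoid(z): return 1 / (1 + np.exp(-z))
    def relu(z):    return np.maximum(0.0, z)

    z = np.array([-5.0, -1.0, -0.1, 0.0, 0.1, 1.0, 5.0])
    for name, f in [("step", step), ("sigmoid", sigmoid), ("relu", relu)]:
        print(f"{name:8s}", np.round(f(z), 3))
    ```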

  • MS Build 2017

    09/06/2017 Duration: 27min

    This episode recaps the Microsoft Build Conference.  Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence.  The episode includes interviews with Rohan Kumar and David Carmona.  

  • [MINI] Max-pooling

    02/06/2017 Duration: 12min

    Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
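
    A minimal numpy sketch of 2x2 max-pooling (an illustration, not code from the episode): each non-overlapping 2x2 block is reduced to its maximum, so a 4x4 input becomes a 2x2 output.

    ```python
    import numpy as np

    def max_pool_2x2(a):
        """Reduce each non-overlapping 2x2 block to its maximum value."""
        h, w = a.shape
        return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    a = np.arange(16).reshape(4, 4)
    print(max_pool_2x2(a))  # 4x4 -> 2x2: [[5, 7], [13, 15]]
    ```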

  • Unsupervised Depth Perception

    26/05/2017 Duration: 23min

    This episode is an interview with Tinghui Zhou.  In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos.  We discuss details of this project and its applications.

  • [MINI] Convolutional Neural Networks

    19/05/2017 Duration: 14min

    CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel.  In image recognition, this kernel is repeated over the entire image.  In this way, CNNs may achieve the property of translational invariance - once trained to recognize certain things, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it.  In this episode, we discuss a few high-level details of this important architecture.
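
    A sketch of the shared-kernel idea (a generic illustration): the same small set of weights is applied at every position of the image, so a learned pattern is detected wherever it appears.

    ```python
    import numpy as np

    def conv2d(image, kernel):
        """Valid cross-correlation with one shared kernel."""
        kh, kw = kernel.shape
        H, W = image.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    edge = np.array([[1.0, -1.0]])  # toy vertical-edge detector
    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1]], dtype=float)
    print(conv2d(img, edge))  # responds at the edge, wherever it sits
    ```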

  • Multi-Agent Diverse Generative Adversarial Networks

    12/05/2017 Duration: 29min

    Despite the success of GANs in image generation, one of their major drawbacks is the problem of 'mode collapse,' in which the generator learns to produce samples with extremely low variety. To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity-enforcing term that assesses similarities between the samples generated by different generators. The second comprises modifying the discriminator's objective function, pushing generations corresponding to different generators towards different identifiable modes.
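
    The sketch below is only a loose, assumed illustration of the first idea, not the paper's actual objective: measure how similar the batches from different generators are, so that penalizing this term in the generators' loss pushes them toward distinct modes.

    ```python
    import numpy as np

    def diversity_penalty(batches):
        """Mean pairwise cosine similarity between the average samples of
        different generators; high values indicate collapse onto one mode."""
        total, pairs = 0.0, 0
        for i in range(len(batches)):
            for j in range(i + 1, len(batches)):
                a, b = batches[i].mean(axis=0), batches[j].mean(axis=0)
                total += a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
                pairs += 1
        return total / pairs

    g1 = np.random.randn(8, 4)
    print(diversity_penalty([g1, g1.copy()]))  # identical generators -> ~1.0
    ```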
