Synopsis
Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.
Episodes
-
Blind Spots in Reinforcement Learning
29/06/2018 Duration: 27min. An intelligent agent trained in a simulated environment may be prone to making mistakes in the real world due to discrepancies between the training and real-world conditions. These hard-to-find areas where an agent makes mistakes, known as "blind spots," can arise for a variety of reasons. In this week’s episode, Kyle is joined by Ramya Ramakrishnan, a PhD candidate at MIT, to discuss the idea of “blind spots” in reinforcement learning and approaches to discovering them.
-
Defending Against Adversarial Attacks
22/06/2018 Duration: 31min. In this week’s episode, our host Kyle interviews Gokula Krishnan from ETH Zurich about his recent contributions to defenses against adversarial attacks. The discussion centers on his latest paper, titled “Defending Against Adversarial Attacks by Leveraging an Entire GAN,” and his proposed algorithm, aptly named ‘Cowboy.’
-
Transfer Learning
15/06/2018 Duration: 18min. On a long car ride, Linhda and Kyle record a short episode. This discussion is about transfer learning, a technique used in machine learning to leverage training from one domain to get a head start on learning in another domain. Transfer learning has some obviously appealing features. Take the example of an image recognition problem. There are now many widely available models that do general image recognition. Detecting that an image contains a "sofa" is an impressive feat. However, for a furniture company interested in more specific details, this classifier is absurdly general. Should the furniture company build a massive corpus of tagged photos, effectively starting from scratch? Or is there a way they can transfer the learnings from the general task to the specific one? A general definition of transfer learning in machine learning is taking some or all aspects of a pre-trained model as the basis for training a new model on a specific and potentially limited dataset.
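As a concrete illustration of that definition, here is a minimal sketch of adapting a pre-trained image classifier to the hypothetical furniture task. It assumes PyTorch and torchvision; the ResNet-18 backbone, the number of furniture classes, and the commented-out data loader are illustrative assumptions, not details from the episode.
```python
# Transfer learning sketch: reuse a general image-recognition model as the
# starting point for a narrower furniture classifier (hypothetical example).
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on a general image-recognition task (ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for the specific task,
# e.g. 12 fine-grained furniture categories (a made-up number).
num_furniture_classes = 12
backbone.fc = nn.Linear(backbone.fc.in_features, num_furniture_classes)

# Only the new layer's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training would then loop over the (much smaller) furniture dataset:
# for images, labels in furniture_loader:
#     optimizer.zero_grad()
#     loss = criterion(backbone(images), labels)
#     loss.backward()
#     optimizer.step()
```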
-
Medical Imaging Training Techniques
08/06/2018 Duration: 25min. Medical imaging is a highly effective tool used by clinicians to diagnose a wide array of diseases and injuries. However, it often requires exceptionally trained specialists such as radiologists to interpret accurately. In this episode of Data Skeptic, our host Kyle Polich is joined by Gabriel Maicas, a PhD candidate at the University of Adelaide, to discuss machine learning systems that can be used by radiologists to improve their accuracy and speed of diagnosis.
-
Kalman Filters
01/06/2018 Duration: 21min. Thanks to our sponsor Galvanize. A Kalman filter is a technique for taking a sequence of observations about an object or variable and determining the most likely current state of that object. In this episode, we discuss it in the context of tracking our lilac crowned amazon parrot Yoshi. Kalman filters have many applications, but the one of particular interest under our current theme of artificial intelligence is efficiently updating one's beliefs in light of new information. The Kalman filter is based upon the Gaussian distribution. This distribution is described by two parameters: μ (the mean) and σ (the standard deviation). The procedure for updating these values in light of new information has a closed form. This means that it can be described with straightforward formulae and computed very efficiently. You may gain a greater appreciation for Kalman filters by considering what would happen if you could not rely on the Gaussian distribution to describe your posterior beliefs. If determining the probability distribut
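Here is a minimal sketch of that closed-form update in one dimension, assuming a simple random-walk motion model; the noise values and the two observations are made up for illustration and are not from the episode.
```python
# One-dimensional Kalman filter: the belief is a Gaussian (mean, variance)
# that is predicted forward and then corrected by each noisy measurement.

def kalman_step(mean, variance, measurement,
                process_noise=1.0, measurement_noise=2.0):
    """One predict/update cycle for a 1-D random-walk motion model."""
    # Predict: uncertainty grows while no measurement arrives.
    pred_mean = mean
    pred_var = variance + process_noise

    # Update: blend prediction and measurement via the Kalman gain.
    gain = pred_var / (pred_var + measurement_noise)
    new_mean = pred_mean + gain * (measurement - pred_mean)
    new_var = (1 - gain) * pred_var
    return new_mean, new_var

# Example: start highly uncertain about Yoshi's position, then incorporate
# two noisy observations; the estimate converges toward them.
mean, var = 0.0, 100.0
for z in [4.8, 5.2]:
    mean, var = kalman_step(mean, var, z)
    print(f"mean={mean:.2f}, variance={var:.2f}")
```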
-
AI in Industry
25/05/2018 Duration: 43min. There's so much to discuss on the AI side that it's hard to know where to begin. Luckily, Steve Guggenheimer, Microsoft’s corporate vice president of AI Business, and Carlos Pessoa, a software engineering manager for the company’s Cloud AI Platform, talked to Kyle about announcements related to AI in industry.
-
AI in Games
18/05/2018 Duration: 25min. Today's interview is with the authors of the textbook Artificial Intelligence and Games.
-
Game Theory
11/05/2018 Duration: 24min. Thanks to our sponsor The Great Courses. This week's episode is a short primer on game theory. For tickets to the free Data Skeptic meetup in Chicago on Tuesday, May 15 at the Mendoza College of Business (224 South Michigan Avenue, Suite 350), click here.
-
The Experimental Design of Paranormal Claims
04/05/2018 Duration: 27min. In this episode of Data Skeptic, Kyle chats with Jerry Schwarz from the Independent Investigations Group (IIG)'s SF Bay Area chapter about testing claims of the paranormal. The IIG is a volunteer-based organization dedicated to investigating paranormal or extraordinary claims from a scientific viewpoint. The group, headquartered at the Center for Inquiry-Los Angeles in Hollywood, offers a $100,000 prize to anyone who can show, under proper observing conditions, evidence of any paranormal, supernatural, or occult power or event. CHICAGO: Tuesday, May 15, 6pm. Come to our Data Skeptic meetup. CHICAGO: Saturday, May 19, 10am. Kyle will be giving a talk at the Chicago AI, Data Science, and Blockchain Conference 2018.
-
Winograd Schema Challenge
27/04/2018 Duration: 36min. Our guest this week, Hector Levesque, joins us to discuss an alternative way to measure a machine’s intelligence, called the Winograd Schema Challenge. The challenge was proposed as a possible alternative to the Turing test during the 2011 AAAI Spring Symposium. It involves a small reading comprehension test about common sense knowledge.
-
The Imitation Game
20/04/2018 Duration: 01h58s. This week on Data Skeptic, we begin with a skit to introduce the topic of this show: The Imitation Game. We open with a scene in the distant future. The year is 2027, and a company called Shamony is announcing their new product, Ada, the most advanced artificial intelligence agent. To prove its superiority, the lead scientist announces that it will use the Turing Test that Alan Turing proposed in 1950. During this, we introduce Turing’s “objections” outlined in his famous paper, “Computing Machinery and Intelligence.” Following that, we talk with improv coach Holly Laurent on the art of improvisation and Peter Clark from the Allen Institute for Artificial Intelligence about question-answering algorithms.
-
Eugene Goostman
13/04/2018 Duration: 17min. In this episode, Kyle shares his perspective on the chatbot Eugene Goostman, which (some claim) "passed" the Turing Test. As a second topic, Kyle also gives a brief introduction to the Winograd Schema Challenge.
-
The Theory of Formal Languages
06/04/2018 Duration: 23min. In this episode, Kyle and Linhda discuss the theory of formal languages. Any language can (theoretically) be a formal language. The requirement is that the language can be rigorously described as a set of strings which are considered part of the language. Those strings are any combination of alphabet characters in the given language.
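A minimal sketch of that definition: a formal language is just a set of strings over a fixed alphabet, which can be represented as a membership test. The particular language below (binary strings with an even number of 1s) is an illustrative choice, not one from the episode.
```python
# Formal language sketch: the alphabet is {0, 1}, and the language is the set
# of strings over that alphabet containing an even number of 1s.

ALPHABET = {"0", "1"}

def in_language(string: str) -> bool:
    """Return True if the string belongs to the example language."""
    # Every symbol must come from the alphabet.
    if any(symbol not in ALPHABET for symbol in string):
        return False
    # Membership rule for this particular language: an even count of 1s.
    return string.count("1") % 2 == 0

print(in_language("1010"))  # True: two 1s
print(in_language("111"))   # False: three 1s
print(in_language("102"))   # False: '2' is not in the alphabet
```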
-
The Loebner Prize
30/03/2018 Duration: 33min. The Loebner Prize is a competition in the spirit of the Turing Test. Participants are welcome to submit conversational agent software to be judged by a panel of humans. This episode includes interviews with Charlie Maloney, a judge in the Loebner Prize, and Bruce Wilcox, a winner of the Loebner Prize.
-
The Master Algorithm
16/03/2018 Duration: 46min. In this week’s episode, Kyle Polich interviews Pedro Domingos about his book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. In the book, Domingos describes what machine learning is doing for humanity, how it works, and what it could do in the future. He also hints at the possibility of an ultimate learning algorithm, which a machine could use to derive all knowledge — past, present, and future.
-
The No Free Lunch Theorems
09/03/2018 Duration: 27min. What's the best machine learning algorithm to use? I hear that XGBoost wins most of the Kaggle competitions that aren't won with deep learning. Should I just use XGBoost all the time? That might work out most of the time in practice, but a proof exists which tells us that there cannot be one true algorithm to rule them all.
-
ML at Sloan Kettering Cancer Center
02/03/2018 Duration: 38min. For a long time, physicians have recognized that the tools they have aren't powerful enough to treat complex diseases, like cancer. In addition to data science and models, clinicians also needed actual products — tools that physicians and researchers can draw upon to answer questions they regularly confront, such as “what clinical trials are available for this patient that I'm seeing right now?” In this episode, our host Kyle interviews guests Alex Grigorenko and Iker Huerga from Memorial Sloan Kettering Cancer Center to talk about how data and technology can be used to prevent, control and ultimately cure cancer.
-
Optimal Decision Making with POMDPs
23/02/2018 Duration: 18min. In a previous episode, we discussed Markov Decision Processes (MDPs), a framework for decision making and planning. This episode explores their generalization, Partially Observable MDPs (POMDPs), an extremely general framework that describes almost every agent-based system.
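A minimal sketch of what "partially observable" adds: the agent never sees the true state, so it keeps a belief (a probability distribution over states) and revises it with Bayes' rule after each observation. The two-state tiger setup and its observation probabilities below are a standard textbook example, not something from the episode.
```python
# POMDP belief update: maintain P(state) and revise it after each observation.

STATES = ["tiger-left", "tiger-right"]

# P(observation | state) for a "listen" action (hypothetical numbers).
OBS_MODEL = {
    "hear-left":  {"tiger-left": 0.85, "tiger-right": 0.15},
    "hear-right": {"tiger-left": 0.15, "tiger-right": 0.85},
}

def update_belief(belief, observation):
    """Bayes' rule: posterior over hidden states given a new observation."""
    unnormalized = {s: OBS_MODEL[observation][s] * belief[s] for s in STATES}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

belief = {"tiger-left": 0.5, "tiger-right": 0.5}  # start fully uncertain
belief = update_belief(belief, "hear-left")
print(belief)  # belief shifts toward tiger-left (0.85 vs 0.15)
```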
-
AI Decision-Making
16/02/2018 Duration: 42min. Making a decision is a complex task. Today's guest Dongho Kim discusses how he and his team at Prowler have been building a platform, accessible by way of APIs and a set of pre-made scripts, for autonomous decision making based on probabilistic modeling, reinforcement learning, and game theory. The aim is for an AI system to make decisions as well as humans can.