Synopsis
Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.
Episodes
-
Drug Discovery with Machine Learning
21/12/2018 Duration: 28min. In today's episode, Kyle chats with Alexander Zhebrak, CTO of Insilico Medicine, Inc. Insilico describes itself as artificial intelligence for drug discovery, biomarker development, and aging research. The conversation in this episode explores the ways in which machine learning, and deep learning in particular, is contributing to the advancement of drug discovery. This happens not just through research but also through software development. Insilico works on data pipelines and tools like MOSES, a benchmarking platform to support research on machine learning for drug discovery. The MOSES platform provides a standardized benchmarking dataset, a set of open-sourced models with unified implementations, and metrics to evaluate and assess their performance.
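To make the idea of a benchmarking metric concrete, here is a purely illustrative sketch, not the MOSES API: it computes one simple metric a platform like this might report, the fraction of generated SMILES strings that RDKit can parse into valid molecules. The function name and sample strings are invented.

```python
# Illustrative sketch of a single generative-chemistry benchmark metric.
# This is not MOSES code; only RDKit's SMILES parser is used.
from rdkit import Chem

def validity(smiles_list):
    """Fraction of generated SMILES strings that parse into a molecule."""
    valid = sum(Chem.MolFromSmiles(s) is not None for s in smiles_list)
    return valid / len(smiles_list)

generated = ["CCO", "c1ccccc1", "not_a_molecule"]  # two valid molecules, one invalid string
print(validity(generated))  # 0.666...
```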
-
Sign Language Recognition
14/12/2018 Duration: 19min. At the NeurIPS 2018 conference, Stradigi AI premiered a training game that helps players learn American Sign Language. This episode brings the first of many interviews conducted at NeurIPS 2018. In this episode, Kyle interviews Chief Data Scientist Carolina Bessega about the deep learning architecture used in this project. The Stradigi AI team was exhibiting a project called the American Sign Language (ASL) Alphabet Game at the recent NeurIPS 2018 conference. They also published a detailed blog post about how they built the system.
-
Data Ethics
07/12/2018 Duration: 19min. This week, Kyle interviews Scott Nestler on the topic of Data Ethics. Today, no ubiquitous, formal ethical protocol exists for data science, although some have been proposed. One example is the INFORMS Ethics Guidelines. Guidelines like this are rather informal compared to those of other professions, such as medicine's Hippocratic Oath. Yet not every profession requires such a formal commitment. In this episode, Scott shares his perspective on a variety of ethical questions specific to data and analytics.
-
Escaping the Rabbit Hole
30/11/2018 Duration: 33min. Kyle interviews Mick West, author of Escaping the Rabbit Hole: How to Debunk Conspiracy Theories Using Facts, Logic, and Respect, about the nature of conspiracy theories, the people who believe them, and how to help people escape belief in false information. Mick is also the creator of metabunk.org. The discussion explores conspiracies like chemtrails, 9/11 conspiracy theories, JFK assassination theories, and the flat Earth theory. We live in a complex world in which no person can have a sufficient understanding of all topics. It's only natural that some percentage of people will eventually adopt fringe beliefs. In this book, Mick provides a fantastic guide to helping individuals who have fallen into a rabbit hole of pseudo-science or fake news.
-
Theorem Provers
23/11/2018 Duration: 18min. Fake news attempts to lead readers, listeners, and viewers to conclusions that are not descriptions of reality, most often by presenting false premises, but sometimes by presenting flawed logic. An argument is only sound and valid if the conclusion follows from all the stated premises, and if there exists a path of logical reasoning leading from those premises to the conclusion. While proving a theorem does feel to most mathematicians like a creative act of discovery, some theorems have been proven using nothing more than search. All the "rules" of logic (like modus ponens) can be encoded into a computer program. That program can start from the premises, apply various combinations of rules to infer new information, and check whether it has inferred the desired conclusion or its negation; a minimal sketch of this idea appears below. This does seem like a mechanical process when painted in this light. However, several challenges exist preventing any theorem prover from instantly solving all the open problems in mathematics.
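Here is a minimal sketch of that search idea, assuming a toy propositional setting: facts are strings, rules are (premises, conclusion) pairs, and the only inference rule applied is modus ponens via forward chaining. It is not a real theorem prover, just the mechanical loop described above.

```python
# Forward chaining with modus ponens over propositional facts and rules.
def forward_chain(facts, rules, goal, max_steps=1000):
    """Repeatedly apply rules whose premises are all known until the goal
    is derived or no new facts can be inferred (a fixed point)."""
    known = set(facts)
    for _ in range(max_steps):
        if goal in known:
            return True
        derived = {
            conclusion
            for premises, conclusion in rules
            if all(p in known for p in premises) and conclusion not in known
        }
        if not derived:          # nothing new follows: the search is exhausted
            return goal in known
        known |= derived
    return goal in known

# Example: rain -> wet ground, wet ground -> slippery; from "rain" infer "slippery".
facts = {"rain"}
rules = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]
print(forward_chain(facts, rules, "slippery"))  # True
```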
-
Automated Fact Checking
16/11/2018 Duration: 31min. Fake news can be responded to with fact-checking. However, it's easier to create fake news than to fact-check it. Full Fact is the UK's independent fact-checking organization. In this episode, Kyle interviews Mevan Babakar, head of automated fact-checking at Full Fact. Our discussion covers the process and challenges of fact-checking. Full Fact has been exploring ways in which machine learning can assist in automating parts of the fact-checking process. Progress in areas like this allows journalists to be more effective and rapid in responding to new information.
-
Single Source of Truth
09/11/2018 Duration: 29min. In mathematics, truth is universal. In data, truth lies in the where clause of the query. As large organizations have grown to rely on their data more significantly for decision making, a common problem is not being able to agree on what the data is. As the volume and velocity of data grow, challenges emerge in answering questions with precision. A simple question like "what was the revenue yesterday?" could become mired in details. Did your query account for transactions that haven't been finalized? If I query again later, should I exclude orders that have been returned since the last query? What time zone should I use? The list goes on and on. In any large enough organization, you are also likely to find multiple copies of the same data. Independent systems might record the same information with slight variance. Sometimes systems will import data from other systems, a process which could fall out of sync for several reasons. For any sufficiently large system, answering analytical questions with precision becomes a real challenge.
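As a small illustration of "truth lies in the where clause," here is a hedged sketch with an invented orders table in pandas: two equally defensible filters produce two different answers to "what was the revenue yesterday?"

```python
# Same data, two "truths": revenue depends entirely on which rows you keep.
import pandas as pd

orders = pd.DataFrame({
    "amount":    [100, 250, 75],
    "finalized": [True, False, True],
    "returned":  [False, False, True],
})

revenue_all = orders["amount"].sum()  # count everything: 425
revenue_strict = orders.loc[
    orders["finalized"] & ~orders["returned"], "amount"
].sum()                               # only finalized, non-returned orders: 100

print(revenue_all, revenue_strict)
```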
-
Detecting Fast Radio Bursts with Deep Learning
02/11/2018 Duration: 44min. Fast radio bursts are an astrophysical phenomenon first observed in 2007. While many observations have been made, science has yet to explain the mechanism for these events. This has led some to ask: could it be a form of extra-terrestrial communication? Probably not. Kyle asks Gerry Zhang, who works at the Berkeley SETI Research Center, about this possibility and, more importantly, about his applications of deep learning to detect fast radio bursts. Radio astronomy captures observations from space which can be converted to a waterfall chart or spectrogram. These data structures can be rendered visually and also make great candidates for applying deep learning to the task of detecting fast radio bursts.
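To make the spectrogram-to-detector idea concrete, here is a minimal sketch, not the pipeline discussed in the episode: a synthetic time series is converted to a spectrogram with SciPy and passed through a tiny, untrained PyTorch network. Every parameter (sample rate, architecture) is an illustrative assumption.

```python
# Toy example: time series -> spectrogram -> CNN score for "burst present".
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

fs = 1_000_000                                           # assumed sample rate in Hz
signal = np.random.randn(fs // 10).astype(np.float32)    # stand-in for a radio trace

# Waterfall chart / spectrogram: power as a function of frequency and time.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=256)
x = torch.tensor(power, dtype=torch.float32).unsqueeze(0).unsqueeze(0)  # (1, 1, F, T)

# A deliberately tiny CNN mapping the spectrogram to a single detection score.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
score = torch.sigmoid(model(x))
print(score.item())  # untrained, so the value is meaningless; this only shows the shape of the pipeline
```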
-
Being Bayesian
26/10/2018 Duration: 24min. This episode explores the root concept of what it is to be Bayesian: describing knowledge of a system probabilistically, having an appropriate prior probability, knowing how to weigh new evidence, and following Bayes's rule to compute the revised distribution. We present this concept in a few different contexts but primarily focus on how our bird Yoshi sends signals about her food preferences. Like many animals, Yoshi is a complex creature whose preferences cannot easily be summarized by a straightforward utility function the way they might be in a textbook reinforcement learning problem. Her preferences are sequential, conditional, and evolving. We may not always know what our bird is thinking, but we have some good indicators that give us clues.
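A small numeric sketch of the kind of update described above; the hypotheses about Yoshi and the likelihoods are invented purely for illustration.

```python
# Bayes's rule: posterior is proportional to likelihood times prior.
prior = {"prefers_millet": 0.5, "prefers_pellets": 0.5}

# Assumed probability of observing "Yoshi chirps at the millet bowl" under each hypothesis.
likelihood = {"prefers_millet": 0.8, "prefers_pellets": 0.3}

unnormalized = {h: likelihood[h] * prior[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)  # {'prefers_millet': 0.727..., 'prefers_pellets': 0.272...}
```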
-
Modeling Fake News
19/10/2018 Duration: 33min. This is our interview with Dorje Brody about his recent paper with David Meier, How to model fake news. This paper uses the tools of communication theory and a sub-topic called filtering theory to describe the mathematical basis for an information channel which can contain fake news. Thanks to our sponsor Gartner.
-
The Louvain Method for Community Detection
12/10/2018 Duration: 26min. Without getting into definitions, we have an intuitive sense of what a "community" is. The Louvain Method for Community Detection is one of the best known mathematical techniques designed to detect communities. This method requires typical graph data in which people are nodes and edges are their connections. It's easy to imagine this data in the context of Facebook or LinkedIn, but the technique applies just as well to any other dataset like cellular phone calling records or pen-pals. The Louvain Method provides a means of measuring the strength of any proposed community based on a concept known as modularity. Modularity is a value in the range from -1/2 to 1 that measures the density of links internal to a community against links external to the community. The quite palatable assumption here is that a genuine community would have members that are strongly interconnected. A community is not necessarily the same thing as a clique; it is not required that all community members know each other. Rather, we simply define a community as a set of nodes that are more densely connected to one another than to the rest of the graph.
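For readers who want to try it, here is a brief sketch using NetworkX's implementation of the Louvain method (louvain_communities, available in recent NetworkX releases); the toy friendship graph is invented.

```python
# Detect communities in a small graph and score them with modularity.
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

G = nx.Graph()
G.add_edges_from([
    ("ann", "bob"), ("bob", "cat"), ("cat", "ann"),   # one tight friend group
    ("dan", "eve"), ("eve", "fay"), ("fay", "dan"),   # another tight friend group
    ("cat", "dan"),                                   # a single bridge between them
])

communities = louvain_communities(G, seed=42)
print(communities)                  # expected: [{'ann', 'bob', 'cat'}, {'dan', 'eve', 'fay'}]
print(modularity(G, communities))   # higher modularity = stronger community structure
```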
-
Cultural Cognition of Scientific Consensus
05/10/2018 Duration: 31min. In this episode, our guest is Dan Kahan, who discusses his research into how people consume and interpret science news. In an era of fake news, motivated reasoning, and alternative facts, important questions need to be asked about how people understand new information. Dan is a member of the Cultural Cognition Project at Yale University, a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. In a paper titled Cultural cognition of scientific consensus, Dan and co-authors Hank Jenkins‐Smith and Donald Braman discuss the "cultural cognition of risk" and establish experimentally that individuals tend to update their beliefs about scientific information through the lens of their pre-existing cultural beliefs. In this way, topics such as climate change, nuclear power, and concealed-carry handgun permits often result in people drawing different conclusions from the same evidence. The findings of this and other studies tell us that on topics such as these, even when people are given proper information about a scientific consensus, their cultural predispositions shape how they receive it.
-
False Discovery Rates
28/09/2018 Duration: 25min. The false discovery rate (FDR) is a methodology that can be useful when struggling with the problem of multiple comparisons. In any experiment, if the experimenter checks more than one dependent variable, then they are making multiple comparisons. Naturally, if you make enough comparisons, you will eventually find some correlation. Classically, people applied the Bonferroni correction. In essence, this procedure dictates that you should lower your significance threshold (raise your standard of evidence) by a specific amount depending on the number of variables you're considering. While effective, this methodology is strict about preventing false positives (type I errors). You aren't likely to find evidence for a hypothesis that is actually false using Bonferroni. However, your exuberance to avoid type I errors may have introduced some type II errors. There could be some hypotheses that are actually true which you did not notice. This episode covers an alternative known as the false discovery rate. The essence of this method is to control the expected proportion of false positives among the results you call significant, rather than the chance of making even one false positive.
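Here is a hedged sketch of that trade-off on made-up p-values: Bonferroni compares every p-value to alpha/m, while the Benjamini-Hochberg FDR procedure uses a sliding threshold and typically rejects more hypotheses.

```python
# Bonferroni vs. Benjamini-Hochberg on eight illustrative p-values.
import numpy as np

p_values = np.array([0.001, 0.004, 0.019, 0.021, 0.042, 0.20, 0.51, 0.74])
alpha = 0.05
m = len(p_values)

# Bonferroni: reject only p-values below alpha / m.
bonferroni_reject = p_values < alpha / m

# Benjamini-Hochberg: sort the p-values, find the largest k with p_(k) <= (k/m) * alpha,
# and reject the k smallest p-values.
order = np.argsort(p_values)
sorted_p = p_values[order]
thresholds = (np.arange(1, m + 1) / m) * alpha
passing = np.nonzero(sorted_p <= thresholds)[0]
bh_reject = np.zeros(m, dtype=bool)
if passing.size:
    bh_reject[order[: passing[-1] + 1]] = True

print(bonferroni_reject.sum(), "rejections under Bonferroni")         # 2
print(bh_reject.sum(), "rejections under Benjamini-Hochberg")         # 4
```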
-
Deep Fakes
21/09/2018 Duration: 30min. Digital videos can be described as sequences of still images and associated audio. Audio is easy to fake. What about video? A video can easily be broken down into a sequence of still images replayed rapidly in sequence. In this context, videos are simply very high dimensional sequences of observations, ripe for input into a machine learning algorithm. The availability of commodity hardware, clever algorithms, and well-designed software to implement those algorithms at scale make it possible to do machine learning on video, but to what end? There are many answers, one interesting approach being the technology called "DeepFakes". The "Deep" of DeepFakes refers to deep learning, and the "fake" refers to the function of the software: to take a real video of a human being and digitally alter their face to match someone else's face. Two examples are Barack Obama via Jordan Peele and the versatility of Nick Cage. This software produces curiously convincing fake videos. Yet, there's something slightly off about them.
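As a conceptual sketch only, and not any particular DeepFakes codebase, the architecture most often described for these face swaps is a shared encoder with one decoder per person; the swap happens by decoding person A's latent code with person B's decoder. The PyTorch model below is deliberately tiny and untrained.

```python
# Shared-encoder / per-identity-decoder autoencoder: the commonly described face-swap idea.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),                           # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 32 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 32, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # trained to reconstruct person A / person B

# The swap: encode a frame of person A, then decode with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```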
-
Fake News Midterm
14/09/2018 Duration: 19min. In this episode, Kyle reviews what we've learned so far in our series on Fake News and talks briefly about where we're going next.
-
Quality Score
07/09/2018 Duration: 18min. Two weeks ago we discussed click through rates, or CTRs, and their usefulness and limits as a metric. Today, we discuss a related metric known as quality score. While that phrase has probably been used to mean dozens of different things in different contexts, our discussion focuses on the idea of quality score encountered in Search Engine Marketing (SEM). SEM is the practice of purchasing keyword-targeted ads shown to customers using a search engine. Most SEM is managed via an auction mechanism: the advertiser states the price they are willing to pay, and in real time, the search engine will serve users advertisements and charge the advertiser. But how do search engines decide which ads to show and what price to charge? This is a complicated question requiring a multi-part answer to address completely. In this episode, we focus on one part of that equation, which is the quality score the search engine assigns to the ad in context. This quality score is calculated via several factors, including crawling the destination page.
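A toy sketch of why quality score matters in the auction; this is a simplification, not any search engine's actual mechanism, and all numbers are invented. Ranking by bid times quality score can let a lower bid with a better ad win the slot.

```python
# Simplified keyword auction: rank ads by bid * quality_score instead of bid alone.
ads = [
    {"advertiser": "A", "bid": 2.50, "quality_score": 3},   # highest bid, weak ad
    {"advertiser": "B", "bid": 1.20, "quality_score": 9},   # lower bid, strong ad
    {"advertiser": "C", "bid": 1.00, "quality_score": 6},
]

for ad in ads:
    ad["rank_score"] = ad["bid"] * ad["quality_score"]

ranked = sorted(ads, key=lambda ad: ad["rank_score"], reverse=True)
winner, runner_up = ranked[0], ranked[1]

# Second-price flavor: the winner pays just enough to keep beating the runner-up.
price_per_click = runner_up["rank_score"] / winner["quality_score"]
print(winner["advertiser"], round(price_per_click, 2))  # B wins at ~0.83 despite bidding less than A
```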
-
The Knowledge Illusion
31/08/2018 Duration: 40min. Kyle interviews Steven Sloman, Professor in the School of Cognitive, Linguistic, and Psychological Sciences at Brown University. Steven is co-author of The Knowledge Illusion: Why We Never Think Alone and Causal Models: How People Think about the World and Its Alternatives. Steven shares his perspective and research into how people process information and what this teaches us about the existence of and belief in fake news.
-
Click Through Rates
24/08/2018 Duration: 31min. A Click Through Rate (CTR) is the proportion of clicks to impressions of some item of content shared online. This terminology is most commonly used in digital advertising but applies just as well to content websites might choose to feature on their homepage or in search results. A CTR is intuitively appealing as a metric for optimization. After all, if users are uninterested in some content, under normal circumstances, it's reasonable to assume they would ignore the content rather than clicking on it. On the other hand, the best content is likely to elicit a high CTR as users signal their interest by following the hyperlink. In the advertising world, a website could charge per impression, per click, or per action. Both impression- and action-based pricing have asymmetrical results for the publisher and advertiser. However, paying per click (CPC-based advertising) seems to strike a nice balance. For this and other reasons, many digital advertising mechanisms (such as Google AdWords) use CPC as the payment model.
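A short sketch of the arithmetic behind CTR and the pricing comparison; the traffic counts and prices are invented for illustration.

```python
# CTR and expected revenue per impression under impression-based vs. click-based pricing.
impressions = 10_000
clicks = 230

ctr = clicks / impressions
print(f"CTR = {ctr:.2%}")                      # 2.30%

cpm = 2.00   # assumed price per 1,000 impressions
cpc = 0.40   # assumed price per click

revenue_per_impression_cpm = cpm / 1000        # fixed, regardless of clicks
revenue_per_impression_cpc = cpc * ctr         # expected value: price per click times CTR

print(revenue_per_impression_cpm)              # 0.002
print(round(revenue_per_impression_cpc, 4))    # 0.0092
```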
-
Algorithmic Detection of Fake News
17/08/2018 Duration: 46min. The scale and frequency with which information can be distributed on social media make the problem of fake news a rapidly metastasizing issue. To do any content filtering or labeling demands an algorithmic solution. In today's episode, Kyle interviews Kai Shu and Mike Tamir about their independent work exploring the use of machine learning to detect fake news. Kai Shu and his co-authors published Fake News Detection on Social Media: A Data Mining Perspective, a research paper which both surveys the existing literature and organizes the structure of the problem in a robust way. Mike Tamir led the development of fakerfact.org, a website and Chrome/Firefox plugin which leverages machine learning to try to predict the category of a previously unseen web page, with categories like opinion, wiki, and fake news.
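As a purely illustrative sketch, not the approach of either guest or of fakerfact.org, here is the simplest possible text classifier for labeling pages: TF-IDF features with logistic regression in scikit-learn. The tiny training set is made up, so the prediction only demonstrates the plumbing.

```python
# Minimal text-classification pipeline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Scientists publish peer reviewed study on vaccine efficacy",
    "City council approves budget after public hearing",
    "Miracle cure doctors don't want you to know about",
    "Shocking proof the moon landing was staged",
]
train_labels = ["reliable", "reliable", "fake", "fake"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# With four training examples the output is not meaningful; a real system needs far more data.
print(model.predict(["You won't believe the secret they are hiding"]))
```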