Synopsis
Data Skeptic is a data science podcast exploring machine learning, statistics, artificial intelligence, and other data topics through short tutorials and interviews with domain experts.
Episodes
- Facebook Bargaining Bots Invented a Language
  21/06/2019 | Duration: 23 min
  In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that enabled them to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.
- Under Resourced Languages
  15/06/2019 | Duration: 16 min
  Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.
- Named Entity Recognition
  08/06/2019 | Duration: 17 min
  Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER. NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
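As a companion to this episode, here is a minimal NER sketch using spaCy. The library choice and example sentence are assumptions (the episode does not prescribe a tool), and the pre-trained en_core_web_sm model must be downloaded first:

```python
# Minimal NER sketch with spaCy -- assumes `pip install spacy` and
# `python -m spacy download en_core_web_sm` have been run.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Kyle interviewed Laura Edell in Seattle on May 6, 2019.")

# Each entity comes back as a text span plus a type label.
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Laura Edell PERSON", "Seattle GPE"
```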
- The Death of a Language
  01/06/2019 | Duration: 20 min
  USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.
- Neural Turing Machines
  25/05/2019 | Duration: 25 min
  Kyle and Linh Da discuss the concepts behind the neural Turing machine.
- Data Infrastructure in the Cloud
  18/05/2019 | Duration: 30 min
  Kyle chats with Rohan Kumar about hyperscale, data at the edge, and a variety of other trends in data engineering in the cloud.
- NCAA Predictions on Spark
  11/05/2019 | Duration: 23 min
  In this episode, Kyle interviews Laura Edell at MS Build 2019. The conversation covers a number of topics, notably her NCAA Final Four prediction model.
- The Transformer
  03/05/2019 | Duration: 15 min
  Kyle and Linh Da discuss attention and the transformer - an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.
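To make "contextual" concrete, here is a hedged sketch (not from the episode; PyTorch, toy dimensions, and randomly initialized weights are all assumptions) showing that a transformer encoder gives the same token id a different vector in each context, unlike a static word2vec lookup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(100, 32)  # toy vocabulary of 100 token ids, dim 32
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
encoder.eval()               # disable dropout for a stable demo

# The same token id (7) appearing in two different contexts.
a = torch.tensor([[7, 1, 2]])
b = torch.tensor([[3, 7, 4]])
with torch.no_grad():
    ctx_a = encoder(emb(a))  # shape (1, 3, 32)
    ctx_b = encoder(emb(b))

# Unlike a word2vec table lookup, token 7's vector depends on its neighbors.
print(torch.allclose(ctx_a[0, 0], ctx_b[0, 1]))  # False
```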
- Mapping Dialects with Twitter Data
  26/04/2019 | Duration: 25 min
  When Twitter users post with geographic tags, it creates the opportunity to pose a variety of interesting questions about language, dialects, and location. In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.
- Sentiment Analysis
  20/04/2019 | Duration: 27 min
  This is an interview with Ellen Loeshelle, Director of Product Management at Clarabridge. We primarily discuss sentiment analysis.
- Attention Primer
  13/04/2019 | Duration: 14 min
  A gentle introduction to the very high-level idea of "attention" in machine learning, as it will play a major role in some upcoming episodes over the next few weeks.
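For readers who want something more concrete than the episode's high-level treatment, here is a minimal sketch of scaled dot-product attention in NumPy (the matrices, sizes, and random values are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    average of the rows of V, weighted by query-key similarity."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of queries to keys
    return softmax(scores) @ V               # each row of weights sums to 1

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values, one per key
print(attention(Q, K, V).shape)  # (2, 4)
```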
- Cross-lingual Short-text Matching
  05/04/2019 | Duration: 24 min
  Modern messaging technology has facilitated a trend towards highly compact, short messages sent by users who can presume a great amount of shared context between the communicating parties. The rules of grammar may be discarded, and visible errors are often a normal part of the conversation.
  >>> Good mornink
  >>> morning
  Yet such short messages are also important for businesses whose users are unlikely to read a large block of text upon completing an order. Similarly, a business might want to offer assistance and effective question-answering solutions in an automated and ideally multilingual way. In this episode, we discuss techniques for designing solutions like that.
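One classical, language-agnostic building block for matching noisy short texts is character n-gram similarity. The sketch below is an illustration under that assumption (it is not the technique discussed in the episode): typos like "mornink" change only a few trigrams, so near-duplicates still score high.

```python
# Character-trigram Jaccard similarity for noisy short texts
# (illustrative sketch, not the episode's method).
def trigrams(text: str) -> set:
    padded = f"  {text.lower()}  "  # padding captures word boundaries
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def jaccard(a: str, b: str) -> float:
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

print(jaccard("good mornink", "good morning"))     # high despite the typo
print(jaccard("good mornink", "deal or no deal"))  # low
```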
- ELMo
  29/03/2019 | Duration: 23 min
  ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.
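A hedged sketch of loading a pretrained ELMo module from TensorFlow Hub (assumptions: tensorflow and tensorflow_hub are installed, the module URL is still served, and its published "default" signature is unchanged):

```python
# Hedged sketch: embed a sentence with the pretrained TF Hub ELMo module.
# Assumes `pip install tensorflow tensorflow_hub` and that
# https://tfhub.dev/google/elmo/3 remains available.
import tensorflow as tf
import tensorflow_hub as hub

elmo = hub.load("https://tfhub.dev/google/elmo/3")
outputs = elmo.signatures["default"](tf.constant(["the bank raised the rate"]))

# "elmo" holds one contextual vector per token: shape (1, num_tokens, 1024).
print(outputs["elmo"].shape)
```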
- BLEU
  23/03/2019 | Duration: 42 min
  Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation, using human translations as examples of acceptable-quality results. This metric has become a widely used standard in the research literature. But is it a perfect measure of machine translation quality?
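A minimal sketch of computing sentence-level BLEU with NLTK (the library choice is an assumption; smoothing is applied because short sentences often have zero higher-order n-gram matches):

```python
# Sentence-level BLEU sketch (assumes `pip install nltk`).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One human reference translation and one machine candidate, tokenized.
reference = [["the", "cat", "is", "on", "the", "mat"]]
candidate = ["the", "cat", "sat", "on", "the", "mat"]

# Smoothing avoids a zero score when an n-gram order has no matches.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```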
- Simultaneous Translation at Baidu
  15/03/2019 | Duration: 24 min
  While at NeurIPS 2018, Kyle chatted with Liang Huang about his work with Baidu Research on simultaneous translation, which was demoed at the conference.
- Human vs Machine Transcription
  08/03/2019 | Duration: 32 min
  Machine transcription (the process of converting audio recordings of speech to text) has come a long way in recent years. But how do the errors made during machine transcription compare to the errors made by a human transcriber? Find out in this episode!
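Transcription errors, human or machine, are commonly scored with word error rate (WER): the word-level edit distance between a transcript and a reference, normalized by the reference length. The episode does not specify this metric, so treat the sketch below as background, not its method:

```python
# Word error rate: Levenshtein distance over words / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits turning the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167
```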
- seq2seq
  01/03/2019 | Duration: 21 min
  A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder. The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes which can effectively be decoded to a matching string requires machine learning to learn an efficient representation of the essence of the strings. In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
  Related Links:
  - tf-seq2seq
  - Describing Multimedia Content using Attention-based Encoder--Decoder Networks
  - Show and Tell: A Neural Image Caption Generator
  - Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
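A hedged skeleton of the encoder/decoder idea in PyTorch (assumptions throughout: toy vocabulary sizes, GRU cells, random untrained weights; a sketch of the shape of the architecture, not a working translator):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder/decoder: the encoder compresses the source sequence
    into a fixed-size hidden state; the decoder unrolls it into the target."""
    def __init__(self, src_vocab, tgt_vocab, hidden=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))    # fixed-size summary of src
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                      # logits per target position

model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))  # batch of 2 source sequences
tgt = torch.randint(0, 1200, (2, 5))  # teacher-forced target inputs
print(model(src, tgt).shape)          # torch.Size([2, 5, 1200])
```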
- Text Mining in R
  22/02/2019 | Duration: 20 min
  Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and professional.
  Related Links:
  - https://stack-survey-2018.glitch.me/
  - https://stackoverflow.blog/2017/03/28/realistic-developer-fiction/
- Recurrent Relational Networks
  15/02/2019 | Duration: 19 min
  One of the most challenging NLP tasks is natural language understanding and reasoning. How can we construct algorithms that are able to achieve human-level understanding of text and answer general questions about it? This is truly an open problem, and one that the bAbI dataset has been constructed to facilitate. bAbI presents a variety of different language understanding and reasoning tasks and exists as a benchmark for comparing approaches. In this episode, Kyle talks to Rasmus Berg Palm about his recent paper Recurrent Relational Networks.
- Text World and Word Embedding Lower Bounds
  08/02/2019 | Duration: 39 min
  In the first half of this episode, Kyle speaks with Marc-Alexandre Côté and Wendy Tay about TextWorld, an engine that simulates text adventure games. Developers are encouraged to try out their reinforcement learning skills by building agents that can programmatically interact with the generated text adventure games. In the second half of this episode, Kyle interviews Kevin Patel about his paper Towards Lower Bounds on Number of Dimensions for Word Embeddings. In this research, they explore the important question of how many dimensions to use when creating a word embedding.
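To make the dimensionality question tangible, here is a hedged sketch using gensim's word2vec (the tiny corpus and parameter values are assumptions for illustration; real experiments like Patel's use large corpora and downstream evaluations):

```python
# Hedged sketch: train toy word2vec models at different embedding sizes
# (assumes `pip install gensim`; this corpus is far too small to yield
# meaningful vectors and only shows the dimensionality knob itself).
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]] * 100

for dim in (10, 50, 300):
    model = Word2Vec(corpus, vector_size=dim, min_count=1, seed=0)
    print(dim, model.wv["cat"].shape)  # (dim,)
```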