Marilyn Walker is a Professor of Computer Science at UC Santa Cruz and a Fellow of the Association for Computational Linguistics (ACL), elected in recognition of her fundamental contributions to statistical methods for dialogue optimization, to centering theory, and to expressive generation for dialogue. Her h-index, a measure of research impact, is 55. Her current research includes computational models of dialogue interaction and conversational agents; analysis of affect, sarcasm, and other social phenomena in social media dialogue; acquiring causal knowledge from text; conversational summarization; interactive story and narrative generation; and statistical methods for training the dialogue manager and the language generation engine of dialogue systems. Before coming to Santa Cruz in 2009, Walker was a professor of computer science at the University of Sheffield. From 1996 to 2003, she was a principal member of the research staff at AT&T Bell Labs and AT&T Research, where she worked on the AT&T Communicator project, developing a new architecture for spoken dialogue systems along with statistical methods for dialogue management and generation. Walker has published more than 200 papers and holds 10 granted U.S. patents. She earned a B.A. in Computer and Information Science at UC Santa Cruz, an M.S. in Computer Science at Stanford University, and an M.A. in Linguistics and a Ph.D. in Computer Science at the University of Pennsylvania.

Her Program:

  • Master-level course, Tuesday, December 11, 2018, 10:15am to noon
  • Place: LIMSI, building 507, conference room, Belvédère road, 91405 Orsay
    • Access map
    • Title: Neural Generation of Diverse Questions using Answer Focus, Contextual and Linguistic Features
    • Abstract:
Question Generation is the task of automatically creating questions from textual input. In this work, we present a new Attentional Encoder-Decoder Recurrent Neural Network model for automatic question generation. Our model incorporates linguistic features and an additional sentence embedding to capture meaning at both the sentence and word levels. The linguistic features are designed to capture information related to named entity recognition, word case, and entity coreference resolution. In addition, our model uses a copying mechanism and a special answer signal that enables the generation of numerous diverse questions for a given sentence. Our model achieves state-of-the-art results of 19.98 BLEU-4 on a benchmark Question Generation dataset, outperforming all previously published results by a significant margin. A human evaluation also shows that these added features improve the quality of the generated questions.
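To make the input representation described in the abstract concrete, the following toy sketch shows one encoder step of this kind of model: per-token word embeddings are concatenated with linguistic-feature embeddings (NER, word case, coreference) and a binary answer-focus signal, then a decoder state attends over the resulting encoder states. All dimensions, weights, and the linear "encoder" are illustrative assumptions, not the paper's actual model, which uses a recurrent encoder-decoder with a copy mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions; the paper's actual sizes are not given here.
d_word, d_feat, d_hid = 8, 2, 8
sent_len = 5

# Per-token inputs: word embedding concatenated with linguistic-feature
# embeddings (standing in for NER, word case, and coreference features)
# plus a binary answer-focus signal marking the answer span.
word_emb = rng.normal(size=(sent_len, d_word))
feat_emb = rng.normal(size=(sent_len, d_feat))
answer_focus = np.array([0.0, 0.0, 1.0, 1.0, 0.0])[:, None]  # tokens 2-3 = answer
enc_input = np.concatenate([word_emb, feat_emb, answer_focus], axis=1)

# Stand-in "encoder": one linear projection per token (a real model
# would use a bidirectional RNN over the sequence).
W_enc = rng.normal(size=(enc_input.shape[1], d_hid))
enc_states = np.tanh(enc_input @ W_enc)              # (sent_len, d_hid)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One decoder step with dot-product attention over the encoder states.
dec_state = rng.normal(size=d_hid)
scores = enc_states @ dec_state                      # one score per token
alpha = softmax(scores)                              # attention weights, sum to 1
context = alpha @ enc_states                         # weighted context vector
```

Changing the `answer_focus` vector changes which tokens the encoder states carry the answer signal for, which is how an answer-conditioned model can generate different questions from the same sentence.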