The Resource: Natural language generation in interactive systems, edited by Amanda Stent, and Srinivas Bangalore, AT&T Research, Florham Park, New Jersey, USA

Label
Natural language generation in interactive systems
Title
Natural language generation in interactive systems
Statement of responsibility
edited by Amanda Stent, and Srinivas Bangalore, AT&T Research, Florham Park, New Jersey, USA
Contributor
  • Stent, Amanda (editor of compilation)
  • Bangalore, Srinivas (editor of compilation)
Subject
  • Natural language processing (Computer science)
  • Interactive computer systems
Language
eng
Summary
"An informative and comprehensive overview of the state-of-the-art in natural language generation (NLG) for interactive systems, this guide serves to introduce graduate students and new researchers to the field of natural language processing and artificial intelligence, while inspiring them with ideas for future research. Detailing the techniques and challenges of NLG for interactive applications, it focuses on the research into systems that model collaborativity and uncertainty, are capable of being scaled incrementally, and can engage with the user effectively. A range of real-world case studies is also included. The book and the accompanying website feature a comprehensive bibliography, and refer the reader to corpora, data, software and other resources for pursuing research on natural language generation and interactive systems, including dialog systems, multimodal interfaces and assistive technologies. It is an ideal resource for students and researchers in computational linguistics, natural language processing and related fields"--
Assigning source
Provided by publisher
Cataloging source
NhCcYBP
Index
index present
LC call number
QA76.9.N38
LC item number
N382 2014
Literary form
non fiction
Nature of contents
  • dictionaries
  • bibliography
Related work or contributor date
  • 1974-
  • 1969-
Related work or contributor name
  • Stent, Amanda
  • Bangalore, Srinivas
  • Cambridge University Press
Subject name
  • Natural language processing (Computer science)
  • Interactive computer systems
Label
Natural language generation in interactive systems, edited by Amanda Stent, and Srinivas Bangalore, AT&T Research, Florham Park, New Jersey, USA
Instantiates
Publication
  • Cambridge University Press, 2014
Bibliography note
Includes bibliographical references and index
Carrier category
online resource
Carrier category code
  • cr
Carrier MARC source
rdacarrier
Content category
text
Content type code
  • txt
Content type MARC source
rdacontent
Contents
  • Machine generated contents note: 1.Introduction / Srinivas Bangalore -- 1.1.Natural language generation -- 1.2.Interactive systems -- 1.3.Natural language generation for interactive systems -- 1.3.1.Collaboration -- 1.3.2.Reference -- 1.3.3.Handling uncertainty -- 1.3.4.Engagement -- 1.3.5.Evaluation and shared tasks -- 1.4.Summary -- References
  • pt. I Joint construction -- 2.Communicative intentions and natural language generation / Nate Blaylock -- 2.1.Introduction -- 2.2.What are communicative intentions? -- 2.3.Communicative intentions in interactive systems -- 2.3.1.Fixed-task models -- 2.3.2.Plan-based models -- 2.3.3.Conversation Acts Theory -- 2.3.4.Rational behavior models -- 2.4.Modeling communicative intentions with problem solving -- 2.4.1.Collaborative problem solving -- 2.4.2.Collaborative problem solving state -- 2.4.3.Grounding -- 2.4.4.Communicative intentions -- 2.5.Implications of collaborative problem solving for NLG -- 2.6.Conclusions and future work -- References -- 3.Pursuing and demonstrating understanding in dialogue / Matthew Stone -- 3.1.Introduction -- 3.2.Background -- 3.2.1.Grounding behaviors -- 3.2.2.Grounding as a collaborative process -- 3.2.3.Grounding as problem solving -- 3.3.An NLG model for flexible grounding -- 3.3.1.Utterances and contributions -- 3.3.2.Modeling uncertainty in interpretation -- 3.3.3.Generating under uncertainty -- 3.3.4.Examples -- 3.4.Alternative approaches -- 3.4.1.Incremental common ground -- 3.4.2.Probabilistic inference -- 3.4.3.Correlating conversational success with grounding features -- 3.5.Future challenges -- 3.5.1.Explicit multimodal grounding -- 3.5.2.Implicit multimodal grounding -- 3.5.3.Grounding through task action -- 3.6.Conclusions -- References -- 4.Dialogue and compound contributions / Eleni Gregoromichelaki -- 4.1.Introduction -- 4.2.Compound contributions -- 4.2.1.Introduction -- 4.2.2.Data -- 4.2.3.Incremental interpretation vs. incremental representation -- 4.2.4.CCs and intentions -- 4.2.5.CCs and coordination -- 4.2.6.Implications for NLG -- 4.3.Previous work -- 4.3.1.Psycholinguistic research -- 4.3.2.Incrementality in NLG -- 4.3.3.Interleaving parsing and generation -- 4.3.4.Incremental NLG for dialogue -- 4.3.5.Computational and formal approaches -- 4.3.6.Summary -- 4.4.Dynamic Syntax (DS) and Type Theory with Records (TTR) -- 4.4.1.Dynamic Syntax -- 4.4.2.Meeting the criteria -- 4.5.Generating compound contributions -- 4.5.1.The DyLan dialogue system -- 4.5.2.Parsing and generation co-constructing a shared data structure -- 4.5.3.Speaker transition points -- 4.6.Conclusions and implications for NLG systems -- References
  • pt. II Reference -- 5.Referability / Kees van Deemter -- 5.1.Introduction -- 5.2.An algorithm for generating boolean referring expressions -- 5.3.Adding proper names to REG -- 5.4.Knowledge representation -- 5.4.1.Relational descriptions -- 5.4.2.Knowledge representation and REG -- 5.4.3.Description Logic for REG -- 5.5.Referability -- 5.6.Why study highly expressive REG algorithms? -- 5.6.1.Sometimes the referent could not be identified before -- 5.6.2.Sometimes they generate simpler referring expressions -- 5.6.3.Simplicity is not everything -- 5.6.4.Complex content does not always require a complex form -- 5.6.5.Characterizing linguistic competence -- 5.7.Whither REG? -- References -- 6.Referring expression generation in interaction: A graph-based perspective / Mariet Theune -- 6.1.Introduction -- 6.1.1.Referring expression generation -- 6.1.2.Preferences versus adaptation in reference -- 6.2.Graph-based referring expression generation -- 6.2.1.Scene graphs -- 6.2.2.Referring graphs -- 6.2.3.Formalizing reference in terms of subgraph isomorphism -- 6.2.4.Cost functions -- 6.2.5.Algorithm -- 6.2.6.Discussion -- 6.3.Determining preferences and computing costs -- 6.4.Adaptation and interaction -- 6.4.1.Experiment I: adaptation and attribute selection -- 6.4.2.Experiment II: adaptation and overspecification -- 6.5.General discussion -- 6.6.Conclusion -- References
  • pt. III Handling uncertainty -- 7.Reinforcement learning approaches to natural language generation in interactive systems / Verena Rieser -- 7.1.Motivation -- 7.1.1.Background: Reinforcement learning approaches to NLG -- 7.1.2.Previous work in adaptive NLG -- 7.2.Adaptive information presentation -- 7.2.1.Corpus -- 7.2.2.User simulations for training NLG -- 7.2.3.Data-driven reward function -- 7.2.4.Reinforcement learning experiments -- 7.2.5.Results: Simulated users -- 7.2.6.Results: Real users -- 7.3.Adapting to unknown users in referring expression generation -- 7.3.1.Corpus -- 7.3.2.Dialogue manager and generation modules -- 7.3.3.Referring expression generation module -- 7.3.4.User simulations -- 7.3.5.Training the referring expression generation module -- 7.3.6.Evaluation with real users -- 7.4.Adaptive temporal referring expressions -- 7.4.1.Corpus -- 7.4.2.User simulation -- 7.4.3.Evaluation with real users -- 7.5.Research directions -- 7.6.Conclusions -- References -- 8.A joint learning approach for situated language generation / Heriberto Cuayahuitl -- 8.1.Introduction -- 8.2.Give -- 8.2.1.The Give-2 corpus -- 8.2.2.Natural language generation for Give -- 8.2.3.Data annotation and baseline NLG system -- 8.3.Hierarchical reinforcement learning for NLG -- 8.3.1.An example -- 8.3.2.Reinforcement learning with a flat state--action space -- 8.3.3.Reinforcement learning with a hierarchical state--action space -- 8.4.Hierarchical reinforcement learning for Give -- 8.4.1.Experimental setting -- 8.4.2.Experimental results -- 8.5.Hierarchical reinforcement learning and HMMs for Give -- 8.5.1.Hidden Markov models for surface realization -- 8.5.2.Retraining the learning agent -- 8.5.3.Results -- 8.6.Discussion -- 8.7.Conclusions and future work -- References
  • pt. IV Engagement -- 9.Data-driven methods for linguistic style control / Francois Mairesse -- 9.1.Introduction -- 9.2.Personage: personality-dependent linguistic control -- 9.3.Learning to control a handcrafted generator from data -- 9.3.1.Overgenerate and rank -- 9.3.2.Parameter estimation models -- 9.4.Learning a generator from data using factored language models -- 9.5.Discussion and future challenges -- References -- 10.Integration of cultural factors into the behavioral models of virtual characters / Elisabeth Andre -- 10.1.Introduction -- 10.2.Culture and communicative behaviors -- 10.2.1.Levels of culture -- 10.2.2.Cultural dichotomies -- 10.2.3.Hofstede's dimensional model and synthetic cultures -- 10.3.Levels of cultural adaptation -- 10.3.1.Culture-specific adaptation of context -- 10.3.2.Culture-specific adaptation of form -- 10.3.3.Culture-specific communication management -- 10.4.Approaches to culture-specific modeling for embodied virtual agents -- 10.4.1.Top-down approaches -- 10.4.2.Bottom-up approaches -- 10.5.A hybrid approach to integrating culture-specific behaviors into virtual agents -- 10.5.1.Cultural profiles for Germany and Japan -- 10.5.2.Behavioral expectations for Germany and Japan -- 10.5.3.Formalization of culture-specific behavioral differences -- 10.5.4.Computational models for culture-specific conversational behaviors -- 10.5.5.Simulation -- 10.5.6.Evaluation -- 10.6.Conclusions -- References -- 11.Natural language generation for augmentative and assistive technologies / Annalu Waller -- 11.1.Introduction -- 11.2.Background on augmentative and alternative communication -- 11.2.1.State of the art -- 11.2.2.Related research -- 11.2.3.Diversity in users of AAC -- 11.2.4.Other AAC challenges -- 11.3.Application areas of NLG in AAC -- 11.3.1.Helping AAC users communicate -- 11.3.2.Teaching communication skills to AAC users -- 11.3.3.Accessibility: Helping people with visual impairments access information -- 11.3.4.Summary -- 11.4.Example project: "How was School Today...?" -- 11.4.1.Use case -- 11.4.2.Example interaction -- 11.4.3.NLG in "How was School Today...?" -- 11.4.4.Current work on "How was School Today...?" -- 11.5.Challenges for NLG and AAC -- 11.5.1.Supporting social interaction -- 11.5.2.Narrative -- 11.5.3.User personalization -- 11.5.4.System evaluation -- 11.5.5.Interaction and dialogue -- 11.6.Conclusions -- References
  • pt. V Evaluation and shared tasks -- 12.Eye tracking for the online evaluation of prosody in speech synthesis / Shari R. Speer -- 12.1.Introduction -- 12.2.Experiment -- 12.2.1.Design and materials -- 12.2.2.Participants and eye-tracking procedure -- 12.3.Results -- 12.4.Interim discussion -- 12.5.Offline ratings -- 12.5.1.Design and materials -- 12.5.2.Results -- 12.6.Acoustic analysis using Generalized Linear Mixed Models (GLMMs) -- 12.6.1.Acoustic factors and looks to the area of interest -- 12.6.2.Relationship between ratings and looks -- 12.6.3.Correlation between rating and acoustic factors -- 12.7.Discussion -- 12.8.Conclusions -- References -- 13.Comparative evaluation and shared tasks for NLG in interactive systems / Helen Hastie -- 13.1.Introduction -- 13.2.A categorization framework for evaluations of automatically generated language -- 13.2.1.Evaluation measures -- 13.2.2.Higher-level quality criteria -- 13.2.3.Evaluation frameworks -- 13.2.4.Concluding comments -- 13.3.An overview of evaluation and shared tasks in NLG -- 13.3.1.Component evaluation: Referring Expression Generation -- 13.3.2.Component evaluation: Surface Realization -- 13.3.3.End-to-end NLG systems: data-to-text generation -- 13.3.4.End-to-end NLG systems: text-to-text generation -- 13.3.5.Embedded NLG components -- 13.3.6.Embedded NLG components: the Give shared task -- 13.3.7.Concluding comments -- 13.4.An overview of evaluation for spoken dialogue systems -- 13.4.1.Introduction -- 13.4.2.Realism and control -- 13.4.3.Evaluation frameworks -- 13.4.4.Shared tasks -- 13.4.5.Discussion -- 13.4.6.Concluding comments -- 13.5.A methodology for comparative evaluation of NLG components in interactive systems
  • Contents note continued: 13.5.1.Evaluation model design -- 13.5.2.An evaluation model for comparative evaluation of NLG modules in interactive systems -- 13.5.3.Context-independent intrinsic output quality -- 13.5.4.Context-dependent intrinsic output quality -- 13.5.5.User satisfaction -- 13.5.6.Task effectiveness and efficiency -- 13.5.7.System purpose success -- 13.5.8.A proposal for a shared task on referring expression generation dialogue context -- 13.5.9.GRUVE: A shared task on instruction giving in pedestrian navigation -- 13.5.10.Concluding comments -- 13.6.Conclusion -- References
Dimensions
unknown
Extent
1 online resource (xvii, 363 pages)
Form of item
online
Isbn
9780511844492
Isbn Type
(electronic bk.)
Media category
computer
Media MARC source
rdamedia
Media type code
  • c
Reproduction note
Electronic reproduction.
Specific material designation
remote
Stock number
99961392166
System control number
(NhCcYBP)11970854

Library Locations

  • African Studies Library
    771 Commonwealth Avenue, 6th Floor, Boston, MA, 02215, US
  • Alumni Medical Library
    72 East Concord Street, Boston, MA, 02118, US
  • Astronomy Library
    725 Commonwealth Avenue, 6th Floor, Boston, MA, 02445, US
  • Fineman and Pappas Law Libraries
    765 Commonwealth Avenue, Boston, MA, 02215, US
  • Frederick S. Pardee Management Library
    595 Commonwealth Avenue, Boston, MA, 02215, US
  • Howard Gotlieb Archival Research Center
    771 Commonwealth Avenue, 5th Floor, Boston, MA, 02215, US
  • Mugar Memorial Library
    771 Commonwealth Avenue, Boston, MA, 02215, US
  • Music Library
    771 Commonwealth Avenue, 2nd Floor, Boston, MA, 02215, US
  • Pickering Educational Resources Library
    2 Silber Way, Boston, MA, 02215, US
  • School of Theology Library
    745 Commonwealth Avenue, 2nd Floor, Boston, MA, 02215, US
  • Science & Engineering Library
    38 Cummington Mall, Boston, MA, 02215, US
  • Stone Science Library
    675 Commonwealth Avenue, Boston, MA, 02445, US