Bridging the gap between complex scientific research and the curious minds eager to explore it.

Computer Science, Human-Computer Interaction

Advancing GUI for Generative AI: Charting the Design Space of Human-AI Interactions through Task Creativity and Complexity.

This study examines human-AI collaboration in text generation tasks, specifically news headline co-creation using large language models (LLMs). The authors categorize tasks into three levels of creativity and complexity: fixed-scope content curation, atomic creative tasks, and complex and interdependent tasks. They investigate various interaction methods, including selection, post-editing, and interactive editing, to understand how they relate to the creative and complex nature of tasks. The authors’ findings suggest that guidance, post-editing, and interactive editing are particularly effective in improving task performance, while maintaining a similar level of perceived trust and control among participants.

Fixed-Scope Content Curation Tasks

  • Author Only: The author creates the narrative without AI assistance.

Atomic Creative Tasks

  • Cross-Domain Analogy Generation: Participants generate analogies between unrelated domains, such as a cat and a car.

Human-AI Interaction Modes

  • Guiding Model Output: Participants provide feedback to the AI model to improve its output.
  • Selecting or Rating Model Output: Participants choose from a list of options provided by the AI model.
  • Post-Editing: Participants edit the AI-generated content to improve its quality.
  • Interactive Editing Initiated by AI: The AI model initiates edits or suggestions, and participants respond to refine the generated content.
  • Writing with Model Assistance Initiated by Humans: Participants use the AI model as a tool to help them write content.
Related Work

Cheng et al.'s taxonomy delineates five prevalent interaction patterns in text summarization: guiding model output, selecting or rating model output, post-editing, interactive editing initiated by AI, and writing with model assistance initiated by humans.
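To make the taxonomy concrete, the five interaction modes above can be sketched as a tiny dispatcher that shows where the human acts in each mode. This is a hypothetical illustration, not the paper's system: the function name `co_create_headline` and its parameters are assumptions for the sketch.

```python
from enum import Enum, auto

class InteractionMode(Enum):
    """The five interaction modes from Cheng et al.'s taxonomy."""
    GUIDING = auto()                  # human steers the model before generation
    SELECTING = auto()                # human picks/rates among candidate outputs
    POST_EDITING = auto()             # human revises a single generated draft
    INTERACTIVE_EDITING = auto()      # AI initiates edit suggestions mid-writing
    MODEL_ASSISTED_WRITING = auto()   # human writes; model assists on request

def co_create_headline(mode: InteractionMode,
                       ai_draft: str,
                       candidates: list[str]) -> str:
    """Toy dispatcher: returns a headline depending on where the
    human intervenes. Only two modes are fleshed out; the rest are
    interleaved human/AI turns, simplified here to passing the draft through."""
    if mode is InteractionMode.SELECTING:
        # Human chooses from model-provided options (here: the first one).
        return candidates[0]
    if mode is InteractionMode.POST_EDITING:
        # Human edits the AI draft after generation (here: trim + title-case).
        return ai_draft.strip().title()
    # GUIDING, INTERACTIVE_EDITING, MODEL_ASSISTED_WRITING would loop
    # between human input and model output; out of scope for this sketch.
    return ai_draft
```

The design point the sketch captures is that each mode shifts *when* the human intervenes relative to generation: before (guiding), between candidates (selecting), or after (post-editing and beyond).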
Conclusion

The study demonstrates that human-AI collaboration can improve performance in text generation tasks, particularly when participants can interact with the AI model and provide feedback. These findings have implications for designing more intuitive and structured interfaces for generative AI, making human-AI co-creation accessible and understandable to a wider audience.