
Reinforced Clarification Question Generation with Defeasibility Rewards for Disambiguating Social and Moral Situations

Context is everything, even in commonsense moral reasoning. Changing contexts can flip the moral judgment of an action: lying to a friend is wrong in general, but may be morally acceptable if it is intended to protect their life. We present …
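The abstract is truncated before the method, but its title points to a defeasibility-based reward for generated clarification questions. As a hedged sketch of one way such a reward could work, the snippet below scores a question by how much plausible answers to it shift a moral judgment model's verdict; `judge_prob_wrong` and `sample_answers` are illustrative placeholders, not the paper's actual interface.

```python
# Hypothetical sketch: reward a clarification question by the largest shift
# it can induce in a moral judge's verdict on the situation.
# `judge_prob_wrong(text) -> float in [0, 1]` and
# `sample_answers(situation, question) -> list[str]` are assumed callables,
# not components named in the paper.

def defeasibility_reward(situation, question, judge_prob_wrong, sample_answers):
    base = judge_prob_wrong(situation)
    # Each answer supplies new context; measure how far it moves the verdict.
    shifts = [abs(judge_prob_wrong(f"{situation} {ans}") - base)
              for ans in sample_answers(situation, question)]
    return max(shifts) if shifts else 0.0
```

Questions whose answers cannot change the judgment earn no reward, which matches the paper's motivating observation that context can flip moral verdicts.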

SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization

We present SODA: the first publicly available, million-scale, high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent that outperforms the previous best-performing agents on both in- and out-of-domain datasets. In …

Quark: Controllable Text Generation with Reinforced Unlearning

Large-scale language models often learn behaviors that are misaligned with user expectations. Generated text may contain offensive or toxic language, contain significant repetition, or be of a different sentiment than desired by the user. We consider …
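The excerpt cuts off before describing the algorithm, but Quark is known to work by conditioning the language model on reward-quantile tokens and fine-tuning with a KL penalty toward the original model. Below is a minimal sketch of that loop under stated assumptions: a Hugging Face causal LM with a pad token set, the quantile tokens already added to the tokenizer vocabulary, and a user-supplied `reward_fn`. It is an illustration of the technique, not the authors' released implementation.

```python
# Minimal sketch of quantile-conditioned fine-tuning in the spirit of Quark.
# Assumptions: `model`/`ref_model` are Hugging Face causal LMs, `tokenizer`
# has a pad token and the K quantile tokens in its vocabulary, and
# `reward_fn(text) -> float` is provided by the user.
import torch

K = 5
QUANTILE_TOKENS = [f"<R{k}>" for k in range(K)]  # <R4> = highest reward

def build_datapool(samples, reward_fn):
    """Score samples, sort by reward, and tag each with its quantile token."""
    scored = sorted(((reward_fn(s), s) for s in samples), key=lambda x: x[0])
    n = len(scored)
    return [(QUANTILE_TOKENS[min(k * K // n, K - 1)], text)
            for k, (_, text) in enumerate(scored)]

def training_step(model, ref_model, tokenizer, pool, beta=0.05):
    """One step: LM loss on quantile-conditioned text + KL penalty to ref.

    Padding handling is simplified for clarity: pad positions contribute
    to both the LM loss and the KL term here.
    """
    batch = [tok + " " + text for tok, text in pool]
    enc = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
    out = model(**enc, labels=enc["input_ids"])
    with torch.no_grad():
        ref_logits = ref_model(**enc).logits
    kl = torch.nn.functional.kl_div(
        torch.log_softmax(out.logits, dim=-1),
        torch.softmax(ref_logits, dim=-1),
        reduction="batchmean")
    loss = out.loss + beta * kl
    loss.backward()
    return loss.item()
```

At inference time, prepending the highest-reward token (`<R4>`) steers generation toward the behavior the reward function favors, which is what lets the model "unlearn" toxic or repetitive outputs without discarding them from training.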

ProsocialDialog: A Prosocial Backbone for Conversational Agents

Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them. To address this issue, we introduce ProsocialDialog, the first large-scale multi-turn dialogue dataset …

Aligning to Social Norms and Values in Interactive Narratives

We focus on creating interactive agents that act in alignment with normative, socially acceptable values in textual environments. Such agents are often trained via reinforcement learning to optimize task performance at any cost, even when such …
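The abstract breaks off before the method, so the following is only a hedged sketch of one common way to steer such an agent: shaping the environment's task reward with a penalty from a norm classifier. The function names and the `lam` weight are illustrative assumptions, not the paper's specific technique.

```python
# Hypothetical reward shaping for a text-environment RL agent.
# `norm_violation_prob(action_text) -> float in [0, 1]` is an assumed
# classifier scoring how socially unacceptable a candidate action is.

def shaped_reward(task_reward, action_text, norm_violation_prob, lam=1.0):
    """Task reward minus a penalty proportional to predicted norm violation."""
    return task_reward - lam * norm_violation_prob(action_text)
```

Tuning `lam` trades off task performance against normative behavior, directly addressing the "task performance at any cost" failure mode the abstract describes.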

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however, requires foresight to plan feasible future paths. …
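To make the lookahead idea concrete, here is a simplified greedy variant of lookahead-guided constrained decoding, loosely in the spirit of A*esque decoding: each candidate next token is scored by its log-probability plus a bonus if a short greedy rollout from it reaches the constraint keyword. The single-keyword constraint, fixed rollout depth, and greedy (rather than beam) search are my simplifications, not the paper's exact algorithm.

```python
# Simplified lookahead-guided decoding sketch. Assumptions: a Hugging Face
# causal LM (`model`, `tokenizer`), one keyword constraint, greedy rollouts
# as the lookahead heuristic. Rolling out every candidate at every step is
# written for clarity, not efficiency.
import torch

@torch.no_grad()
def lookahead_decode(model, tokenizer, prompt, keyword, steps=30,
                     top_k=10, depth=5, alpha=2.0):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    kw_ids = tokenizer(" " + keyword, add_special_tokens=False).input_ids
    for _ in range(steps):
        logprobs = torch.log_softmax(model(ids).logits[0, -1], dim=-1)
        best, best_score = None, -float("inf")
        for tok in torch.topk(logprobs, top_k).indices:
            ext = torch.cat([ids, tok.view(1, 1)], dim=-1)
            # Lookahead: greedy rollout; reward paths that reach the keyword.
            roll = model.generate(ext, max_new_tokens=depth, do_sample=False)
            satisfied = _contains(roll[0].tolist(), kw_ids)
            score = logprobs[tok] + alpha * float(satisfied)
            if score > best_score:
                best, best_score = ext, score
        ids = best
    return tokenizer.decode(ids[0], skip_special_tokens=True)

def _contains(seq, sub):
    return any(seq[i:i + len(sub)] == sub
               for i in range(len(seq) - len(sub) + 1))
```

The heuristic term is what gives the decoder "foresight": a token that looks slightly worse locally can win if continuations from it are likely to satisfy the constraint.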

Symbolic Knowledge Distillation: from General Language Models to Commonsense Models

The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, …
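The alternative the abstract begins to describe is a machine-to-corpus-to-machine pipeline: a general language model generates candidate commonsense knowledge, a critic model filters it, and the surviving corpus trains a compact commonsense student. The sketch below illustrates that pipeline shape; the prompt template and callables like `teacher.generate_text` and `critic_score` are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of symbolic knowledge distillation's pipeline shape:
# generate from a general teacher LM, filter with a critic, keep the rest
# as training data. All names and the prompt are assumptions for illustration.

FEW_SHOT = (
    "PersonX pays for PersonY's meal, as a result, PersonX feels generous.\n"
    "PersonX spills the coffee, as a result, PersonX feels embarrassed.\n")

def distill_corpus(teacher, critic_score, n_events=1000, threshold=0.8):
    corpus = []
    for _ in range(n_events):
        # 1) Sample a candidate event + inference from the general teacher LM.
        candidate = teacher.generate_text(prompt=FEW_SHOT, max_tokens=40)
        # 2) Keep it only if the critic judges it acceptable commonsense.
        if critic_score(candidate) >= threshold:
            corpus.append(candidate)
    return corpus

# 3) The filtered corpus then serves as supervised training data for a
#    smaller commonsense student model (standard fine-tuning, omitted here).
```

The critic is what makes the distilled corpus usable: raising `threshold` trades corpus size for precision of the resulting knowledge.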

"I'm Not Mad": Commonsense Implications of Negation and Contradiction

Natural language inference requires reasoning about contradictions, negations, and their commonsense implications. Given a simple premise (e.g., "I'm mad at you"), humans can reason about the varying shades of contradictory statements ranging from …

Delphi: Towards Machine Ethics and Norms

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state ("thou shalt not kill"), applying such rules to real-world situations is far more complex. For example, while "helping a friend" is …

EnglishBot: A Conversational AI System for Second Language Learning

Today, many students learn to speak a foreign language by listening to and repeating pre-recorded materials, largely due to the lack of practice opportunities with human partners. Leveraging recent advancements in AI, Speech, and NLP, we developed …