Liwei Jiang | 姜力炜
Ximing Lu
Latest
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models
HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions
Information-Theoretic Distillation for Reference-less Summarization
Position Paper: A Roadmap to Pluralistic Alignment
Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement
The Generative AI Paradox: "What It Can Create, It May Not Understand"
Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Faith and Fate: Limits of Transformers on Compositionality
Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning
JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models
NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Reinforced Clarification Question Generation with Defeasibility Rewards for Disambiguating Social and Moral Situations
SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
Quark: Controllable Text Generation with Reinforced Unlearning
ProsocialDialog: A Prosocial Backbone for Conversational Agents
NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models