Xiachong Feng 冯夏冲

Currently, I am a Postdoctoral Fellow in the Natural Language Processing Group at The University of Hong Kong (HKU NLP), working with Prof. Lingpeng Kong and Prof. Chuan Wu.

Previously, I was a Ph.D. student at Harbin Institute of Technology, where I was a member of the Text Generation Group of the HIT-SCIR Lab under the supervision of Prof. Bing Qin and Prof. Xiaocheng Feng.

Email  /  CV  /  Google Scholar  /  Github  /  Twitter

Selected Publications
Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization
Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin
IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP)
Aligning Semantic in Brain and Language: A Curriculum Contrastive Method for Electroencephalography-to-Text Generation
Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu
IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE)
A Survey on Dialogue Summarization: Recent Advances and New Frontiers
Xiachong Feng, Xiaocheng Feng, Bing Qin
IJCAI 2022
[paper] [blog]
Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization
Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
ACL 2021
[paper] [code]
Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization
Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng
IJCAI 2021
[paper] [code]

  • Honors and Awards

  • National Scholarship for Ph.D., 2021
  • Outstanding Graduate Award at Provincial Level, 2018
  • Outstanding Student Cadre at Provincial Level, 2017
  • National Scholarship for B.E., 2017
  • National Scholarship for B.E., 2016
  • Education
    Harbin Institute of Technology

    • Ph.D. in Computer Science
    • 2018.09 ~ 2023.12

    • B.E. in Software Engineering
    • 2014.09 ~ 2018.06
    National University of Singapore

    • Remote Intern
    • 2020.08 ~ 2023.06

  • Invited Talks

  • Jul. 2024, I will give a talk "Strategic Reasoning of Large Language Models from a Game Theory Perspective" at the CCF 2024 Annual Conference on Computational Economics.
  • Sep. 2023, I gave a talk "Taking the First Step Toward Interdisciplinary Studies" at the 2023 China Conference on Social Media Processing.
  • May 2023, I gave a talk "Recent Advances in Large Language Models" at the Interactive Data Exploration System (IDEAS) Lab, Shandong University.
  • Jan. 2022, I gave a talk "Dialogue Summarization" at Alibaba DAMO Academy's Language Technology Lab (Conversational AI team).
  • Jun. 2021, I gave a talk "Dialogue Summarization" at the Natural Language Processing Lab (THUNLP) at Tsinghua University.
  • Presentation

  • Recent Advances in Large Language Models [PDF]
  • ChatGPT Evaluation for NLP: A Meta Survey [PDF] [Google Doc]
  • Federated Learning Meets Knowledge Distillation [PDF]
  • Dialogue Summarization (2022.1) [PDF]
  • Prompt Survey [PDF] [PPTX]
  • Dialogue Summarization (2021.5) [PDF]
  • A Brief Overview of Recent Advances in Dialogue Summarization [PDF] [Blog]
  • Cross-lingual Summarization [PDF]
  • Efficient Transformers [PDF]
  • Multi-modal Summarization [PDF]
  • ACL20 Summarization [PDF]
  • Data Augmentation [PDF]
  • A Brief Introduction to Text Summarization [PDF]
  • ACL19 Summarization [PDF]
  • Graph Neural Networks [PDF]
  • Knowledge Distillation [PDF]
  • Meta Learning [PDF]
  • Non-Autoregressive Decoding [PDF]
  • Event Extraction [PDF]
  • Advanced Pre-training Language Models: a Brief Introduction [PDF]
  • Paper Slides

  • Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting [PDF]
  • Cognitive Architectures for Language Agents [PDF]
  • Do Androids Laugh at Electric Sheep? Humor “Understanding” Benchmarks from The New Yorker Caption Contest [PDF]
  • Evidence of a predictive coding hierarchy in the human brain listening to speech [Google Doc]
  • BrainBERT: Self-supervised representation learning for Intracranial Electrodes [PDF]
  • switch-GLAT: Multilingual Parallel Machine Translation Via Code-Switch Decoder [PDF]
  • Memorizing Transformers [PDF]
  • One Model, Multiple Tasks: Pathways for Natural Language Understanding [PDF]
  • CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models [PPTX]
  • Contrastive Learning with Adversarial Perturbations for Conditional Text Generation [PDF]
  • Making Pre-trained Language Models Better Few-shot Learners [PDF]
  • Group-wise Contrastive Learning for Neural Dialogue Generation [PDF]
  • Heterogeneous Graph Transformer [PDF]
  • Plug and Play Language Models: A Simple Approach to Controlled Text Generation [PDF]
  • Multimodal Abstractive Summarization for How2 Videos [PDF]
  • A Simple Theoretical Model of Importance for Summarization [PDF]
  • Text Generation from Knowledge Graphs with Graph Transformers [PDF]
  • The Curious Case of Neural Text Degeneration [PDF]
  • Episodic Memory in Lifelong Language Learning [PDF]
  • DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation [PDF]
  • Commonsense for Generative Multi-Hop Question Answering Tasks [PDF]
  • Commonsense Knowledge Aware Conversation Generation with Graph Attention [PDF]
  • Deep Multitask Learning for Semantic Dependency Parsing [PDF]
  • Dynamically Fused Graph Network for Multi-hop Reasoning [PDF]
  • Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory [PDF]
  • Learning Neural Templates for Text Generation [PDF]
  • Learning to Ask Questions in Open-domain Conversational Systems with Typed Decoders [PDF]
  • Linguistic Knowledge and Transferability of Contextual Representations [PDF]
  • Multi-Domain Neural Machine Translation with Word-Level Domain Context Discrimination [PDF]
  • Semi-Supervised QA with Generative Domain-Adaptive Nets [PDF]
  • Notes

  • EMNLP19 Summarization [PDF]
  • Cross-Lingual Word Embedding [PDF]
  • Brief Intro to Summarization [PDF]
  • GAN in Text Generation [PDF]
  • EMNLP19 and NIPS19 Notes [PDF]
  • Boosting [PDF]
  • The Maximum Entropy Model [PDF]
  • HMM [PDF]
  • ConceptNet [PDF]

  • Design and source code from Jon Barron.