Yongil Kim

Seoul National University

Yongil's Research Blog


  1. [ICML2023] A Watermark for Large Language Models » 17 Dec 2023
  2. [ACL2023] Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions » 11 Sep 2023
  3. [ACL2023] FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue » 19 Aug 2023
  4. [ACL2022] An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation » 28 Feb 2023
  5. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models » 27 Feb 2023
  6. [NAACL2022] Database Search Results Disambiguation for Task-Oriented Dialog Systems » 21 Dec 2022
  7. [ICML2022] Data Determines Distributional Robustness in Contrastive Language-Image Pre-training (CLIP) » 14 Nov 2022
  8. [ICML2022] NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework » 13 Nov 2022
  9. [ICML2022] Describing Differences between Text Distributions with Natural Language » 12 Nov 2022
  10. [ICML2022] What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization? » 11 Nov 2022
  11. [CVPR 2022 Tutorial] Denoising Diffusion-based Generative Modeling: Foundations and Applications (1) » 05 Nov 2022
  12. [ICML2022] Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts » 05 Nov 2022
  13. [ICML2022] Dialog Inpainting: Turning Documents into Dialogs » 01 Nov 2022
  14. [ICML2022] VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix » 31 Oct 2022
  15. [BEiT-3] Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks » 06 Sep 2022