An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"
Updated 2024-07-25 14:07:31 +03:00
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Updated 2023-07-25 17:21:55 +03:00
Official Implementation of "A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining"
Updated 2023-06-12 21:16:08 +03:00
ICLR 2022 paper; state-of-the-art table pre-training model. TAPEX: Table Pre-training via Learning a Neural SQL Executor
Updated 2023-02-06 11:06:18 +03:00
Large-scale pretraining for dialogue
Updated 2022-10-18 02:41:52 +03:00
Updated 2022-08-04 22:59:22 +03:00
BANG is a new pretraining model that bridges the gap between autoregressive (AR) and non-autoregressive (NAR) generation. AR and NAR generation can be viewed uniformly in terms of how many previously generated tokens each position may attend to, and BANG bridges the two with a novel model structure designed for large-scale pretraining (see the sketch after this entry). The pretrained BANG model simultaneously supports AR, NAR, and semi-NAR generation to meet different requirements.
Updated 2022-02-06 23:57:17 +03:00
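The description above frames AR, NAR, and semi-NAR generation as differing only in how many previous target tokens are visible to each position. A minimal sketch of that idea follows; it is not code from the BANG repository, and the function name and the `visible_prev` knob are illustrative assumptions.

```python
# Sketch: decoder self-attention masks that interpolate between AR and NAR
# generation by limiting how many previous target positions are visible.
import numpy as np

def decoder_self_attention_mask(seq_len: int, visible_prev: int) -> np.ndarray:
    """Return a (seq_len, seq_len) mask where mask[i, j] == 1 means
    target position i may attend to target position j.

    visible_prev >= seq_len - 1 -> full causal attention (AR)
    visible_prev == 0           -> no previous targets visible (NAR)
    otherwise                   -> a limited window (semi-NAR)
    """
    mask = np.zeros((seq_len, seq_len), dtype=np.int8)
    for i in range(seq_len):
        # Each position always sees itself; earlier positions are exposed
        # only within the chosen window, which interpolates AR and NAR.
        start = max(0, i - visible_prev)
        mask[i, start : i + 1] = 1
    return mask

if __name__ == "__main__":
    print(decoder_self_attention_mask(4, visible_prev=3))  # AR-style causal mask
    print(decoder_self_attention_mask(4, visible_prev=0))  # NAR: self only
    print(decoder_self_attention_mask(4, visible_prev=1))  # semi-NAR window
```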
State-of-the-art pretrained vision model from Bing Multimedia
Updated 2021-03-24 19:31:48 +03:00