TUTA and ForTaP for Structure-Aware and Numerical-Reasoning-Aware Table Pre-Training
Updated 2024-11-19 03:33:30 +03:00
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
nlp
multimodal
beit
beit-3
deepnet
document-ai
foundation-models
kosmos
kosmos-1
layoutlm
layoutxlm
llm
minilm
mllm
pre-trained-model
textdiffuser
trocr
unilm
xlm-e
Updated 2024-11-09 13:45:59 +03:00
An official implementation for " UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation"
video
localization
segmentation
caption-task
coin
joint
msrvtt
multimodal-sentiment-analysis
multimodality
pretrain
pretraining
retrieval-task
video-language
video-text
video-text-retrieval
youcookii
alignment
caption
Updated 2024-07-25 14:07:31 +03:00
Grounded Language-Image Pre-training
Updated 2024-01-24 07:56:13 +03:00
Oscar and VinVL
Updated 2023-08-28 04:34:58 +03:00
End-to-end recipes for pre-training and fine-tuning BERT using the Azure Machine Learning service
nlp
pytorch
azure-machine-learning
language-model
bert
tuning
finetuning
pretraining
azureml-bert
bert-model
pretrained-models
Updated 2023-06-12 21:59:00 +03:00
TAP: Text-Aware Pre-training for Text-VQA and Text-Caption, CVPR 2021 (Oral)
Updated 2023-05-23 01:20:31 +03:00
TAPEX: Table Pre-training via Learning a Neural SQL Executor (ICLR 2022; SOTA table pre-training model)
Updated 2023-02-06 11:06:18 +03:00
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Updated 2022-11-28 22:09:50 +03:00
The official implementation of dual-view molecule pre-training.
Updated 2021-11-22 06:36:51 +03:00
MPNet: Masked and Permuted Pre-training for Language Understanding https://arxiv.org/pdf/2004.09297.pdf
Updated 2021-09-11 12:42:41 +03:00
An automated and scalable approach to generating tasklets from a natural-language task query and a website URL. Glider requires no pre-training; it models tasklet extraction as a state-space search in which agents explore a website's UI and are rewarded for making progress toward task completion. The reward combines the agent's navigation pattern with the similarity between its trajectory and the task query (see the sketch after this entry).
Updated 2021-09-03 06:52:47 +03:00
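The Glider entry above describes tasklet extraction as a reward-guided state-space search over a website's UI. The following is only a minimal, self-contained sketch of that general idea: a best-first search over a hypothetical action space, with a toy reward that mixes trajectory/query similarity and a navigation-pattern score. All names here (Node, reward, actions_from, etc.) are assumptions for illustration and do not come from the Glider codebase.

# Minimal sketch of reward-guided state-space search, in the spirit of the
# Glider description above. Names and reward terms are hypothetical.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Node:
    neg_reward: float                        # heapq is a min-heap, so store -reward
    trajectory: tuple = field(compare=False)  # sequence of UI actions taken so far


def similarity(trajectory, query_tokens):
    # Toy trajectory/query similarity: fraction of query tokens mentioned in actions.
    text = " ".join(trajectory).lower()
    hits = sum(1 for tok in query_tokens if tok in text)
    return hits / max(len(query_tokens), 1)


def navigation_score(trajectory):
    # Toy navigation-pattern term: prefer short, direct trajectories.
    return 1.0 / (1.0 + len(trajectory))


def reward(trajectory, query_tokens):
    return similarity(trajectory, query_tokens) + navigation_score(trajectory)


def search(actions_from, query, max_steps=100):
    # Best-first search: repeatedly expand the most promising trajectory so far.
    query_tokens = query.lower().split()
    frontier = [Node(-reward((), query_tokens), ())]
    best_traj, best_reward = (), float("-inf")
    steps = 0
    while frontier and steps < max_steps:
        node = heapq.heappop(frontier)
        if -node.neg_reward > best_reward:
            best_traj, best_reward = node.trajectory, -node.neg_reward
        for action in actions_from(node.trajectory):
            traj = node.trajectory + (action,)
            heapq.heappush(frontier, Node(-reward(traj, query_tokens), traj))
        steps += 1
    return best_traj


if __name__ == "__main__":
    # Hypothetical, hard-coded action space standing in for a real website UI.
    def actions_from(trajectory):
        if len(trajectory) >= 3:
            return []
        return ["click search box", "type flight to Boston", "click submit"]

    print(search(actions_from, "book a flight to Boston"))

A real system would derive the action space from the live DOM and use far richer similarity and navigation signals; the sketch only shows how a reward over trajectories can steer the search.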
ICLR 2021: Pre-Training for Context Representation in Conversational Semantic Parsing
Updated 2021-08-30 22:08:54 +03:00
Multitask Multilingual Multimodal Pre-training
Updated 2021-05-13 09:56:36 +03:00