Tutel MoE: An Optimized Mixture-of-Experts Implementation
Updated 2024-09-13 14:49:34 +03:00
Ongoing research on training transformer language models at scale, including BERT & GPT-2
Updated 2024-09-04 08:42:52 +03:00
This package features data-science-related tasks for developing new recognizers for Presidio. It is used to evaluate the entire system as well as individual PII recognizers or PII detection models.
Updated 2024-08-07 19:17:23 +03:00
Hierarchical Transformers for Knowledge Graph Embeddings (EMNLP 2021)
Updated 2024-07-25 14:00:19 +03:00
This is a collection of our NAS and Vision Transformer work.
Updated 2024-07-25 13:29:13 +03:00
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Updated 2024-07-15 18:00:32 +03:00
Graphormer is a general-purpose deep learning backbone for molecular modeling.
Updated 2024-05-28 09:22:34 +03:00
Federated Learning Utilities and Tools for Experimentation
Updated 2024-01-11 22:20:09 +03:00
XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale
Updated 2023-12-20 23:40:01 +03:00
Table Transformer (TATR) is a deep learning model for extracting tables from unstructured documents (PDFs and images). This is also the official repository for the PubTables-1M dataset and GriTS evaluation metric.
Updated 2023-09-07 07:38:34 +03:00
EsViT: Efficient self-supervised Vision Transformers
Updated 2023-08-28 04:38:18 +03:00
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Updated 2023-07-25 17:21:55 +03:00
Generative Retrieval Transformer
Updated 2023-07-23 06:20:55 +03:00
Research code for CVPR 2021 paper "End-to-End Human Pose and Mesh Reconstruction with Transformers"
Updated 2023-07-07 01:02:19 +03:00
🤗Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
Updated 2023-06-13 00:30:31 +03:00
Transformer-based machine reading comprehension in combination with Azure Cognitive Search
Updated 2023-06-13 00:29:56 +03:00
Large-scale pretraining for dialogue
Updated 2022-10-18 02:41:52 +03:00
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
Updated 2022-09-29 18:17:40 +03:00
This is an official implementation of CvT: Introducing Convolutions to Vision Transformers.
Updated 2022-06-22 07:21:09 +03:00
FastFormers - highly efficient transformer models for NLU
Updated 2022-05-19 01:07:52 +03:00
[NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
Updated 2022-03-27 08:21:56 +03:00
CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022
Updated 2022-03-18 04:29:20 +03:00
Common PyTorch Modules
Updated 2021-10-18 13:52:28 +03:00
This package implements THOR: Transformer with Stochastic Experts.
Updated 2021-10-08 00:19:22 +03:00
A relation-aware semantic parsing model for translating English to SQL
Updated 2021-06-19 03:42:00 +03:00
RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
Updated 2021-03-18 12:01:39 +03:00