https://github.com/huggingface/transformers


English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी | Русский | Português | తెలుగు | Français | Deutsch | Tiếng Việt

State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow


🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.

These models can be applied on:

- 📝 Text, for tasks such as text classification, question answering, summarization, and translation.
- 🖼️ Images, for tasks such as image classification, object detection, and segmentation.
- 🗣️ Audio, for tasks such as speech recognition and audio classification.

Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
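The quickest way to try one of these pretrained models on a task is the `pipeline` API. The sketch below runs sentiment analysis on a sentence; when no model id is given, the library downloads a default checkpoint for the task on first use, so the exact model (and score) you get depends on the library version.

```python
from transformers import pipeline

# Allocate a pipeline for sentiment analysis. With no model specified,
# a default pretrained checkpoint for this task is downloaded and cached.
classifier = pipeline("sentiment-analysis")

result = classifier("We are very happy to use Transformers!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The same `pipeline` call accepts other task names (e.g. `"image-classification"`, `"automatic-speech-recognition"`) to cover the other modalities listed above.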

🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets, and then share them with the community on our model hub. At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
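Downloading a pretrained model for direct use works through the `Auto` classes and `from_pretrained`. A minimal sketch, using `distilbert-base-uncased` purely as an illustrative checkpoint (any model id from the hub works the same way):

```python
from transformers import AutoTokenizer, AutoModel

# Download (and cache) the tokenizer and model weights from the hub.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

# Tokenize a sentence into PyTorch tensors and run a forward pass.
inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)

# The final hidden states: (batch, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```

Because each architecture lives in its own standalone module, the model loaded here can also be subclassed or its source edited directly for research experiments without touching the rest of the library.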