(1) Collect some famous open-source AI projects, adding them as git submodules under the 'resources' folder
(2) Refactor some of those projects, add comments, or run experiments to learn them deeply, adding my own code under the 'content' folder
Records:
2020-3: Refactored ELECTRA and added more comments to make it more readable; see 'content/Easy-ELECTRA'. ELECTRA is a method for self-supervised language representation learning that can pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU; at large scale, it achieves state-of-the-art results on the SQuAD 2.0 dataset. For a detailed description and experimental results, see the paper "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators".
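The replaced-token-detection idea above can be sketched in a few lines. This is a toy illustration only, not code from ELECTRA or Easy-ELECTRA: the generator is simulated by uniform random sampling from a tiny hypothetical vocabulary (the real ELECTRA uses a small masked language model), and the function name `make_rtd_example` is invented for this sketch.

```python
import random

def make_rtd_example(tokens, mask_prob=0.15, vocab=None, seed=0):
    """Toy replaced-token-detection setup (the ELECTRA pre-training task).

    Each token is independently selected for corruption with probability
    mask_prob; a "generator" (here: uniform sampling from a toy vocab)
    proposes a replacement. The discriminator's per-token labels are
    1 = replaced ("fake"), 0 = original ("real"). As in ELECTRA, if the
    sampled replacement happens to equal the original token, it is
    labeled as real.
    """
    rng = random.Random(seed)
    vocab = vocab or ["the", "a", "cat", "dog", "sat", "ran", "mat", "on"]
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            fake = rng.choice(vocab)
            corrupted.append(fake)
            labels.append(0 if fake == tok else 1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```

In the real model, the discriminator is then trained with a binary cross-entropy loss over all positions (not only masked ones), which is part of why ELECTRA is more compute-efficient than masked-language-model pre-training.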
About
Collect, learn, and try to refactor some famous AI projects, covering ML, DL, NLP, CV, and more.