FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
Curriculum Pseudo Labeling (CPL), a curriculum learning approach that leverages unlabeled data according to the model’s learning status, is proposed and applied to FixMatch; the resulting method, FlexMatch, achieves state-of-the-art performance on a variety of SSL benchmarks.
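The core idea behind CPL can be sketched as per-class confidence thresholds scaled by each class's estimated learning status. A minimal, hypothetical simplification (the real FlexMatch estimates learning status from confident predictions on unlabeled data and supports non-linear mappings):

```python
def flexible_thresholds(confident_counts, base_tau=0.95):
    """Scale a fixed confidence threshold per class by learning status.

    Hypothetical simplification of CPL: classes the model already learns
    well (many confident predictions) keep a high threshold, while harder
    classes get a lower one, so more of their unlabeled samples are
    pseudo-labeled early in training.
    """
    m = max(max(confident_counts), 1)  # avoid division by zero
    return [base_tau * (c / m) for c in confident_counts]


# Example: the class with the most confident predictions keeps the full
# threshold; harder classes are admitted at proportionally lower ones.
thresholds = flexible_thresholds([20, 10, 5])
```

Pseudo-labels for class `c` would then only be kept when the model's confidence exceeds `thresholds[c]`.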
TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions
TORQUE is introduced, a new English reading comprehension benchmark built on 3.2k news snippets with 21k human-generated questions querying temporal relationships, and results show that RoBERTa-large achieves an exact-match score of 51% on the test set of TORQUE, about 30% behind human performance.
FPGA Acceleration of LSTM Based on Data for Test Flight
- Z. Sun, Yongxin Zhu, Zhiqiang Que
- Computer Science · IEEE International Conference on Smart Cloud…
- 1 September 2018
This work proposes an FPGA-based LSTM-RNN accelerator to optimize the accuracy and speed of existing aircraft anomaly-detection models, improving computation speed without sacrificing accuracy.
Towards a general theory of access
This paper integrates and extends many of the concepts of accessibility deriving from Hansen’s (1959) seminal paper, and develops a theory of access that generalizes from the particular measures of…
CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model
This work proposes CLSEBERT, a Contrastive Learning Framework for a Syntax Enhanced Code Pre-Trained Model, to handle various code intelligence tasks, and introduces two novel pretraining objectives, including one that predicts the edges between nodes in the abstract syntax tree.
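The edge-prediction objective presupposes enumerating parent–child edges in a program's AST. A small sketch of that preprocessing step using Python's standard `ast` module (the function name is illustrative; the paper's pipeline and target languages may differ):

```python
import ast


def ast_edges(source):
    """Enumerate (parent, child) node-type pairs in the AST of `source`.

    A hypothetical preprocessing step: these edges could serve as
    positive pairs for an edge-prediction pretraining objective.
    """
    tree = ast.parse(source)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges


# Example: for "x = 1", the AST contains Module -> Assign,
# Assign -> Name (target), and Assign -> Constant (value).
edges = ast_edges("x = 1")
```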
Nitrogen‐Doped Graphene Ribbon Assembled Core–Sheath MnO@Graphene Scrolls as Hierarchically Ordered 3D Porous Electrodes for Fast and Durable Lithium Storage
Graphene scroll is an emerging 1D tubular form of graphitic carbon that has potential applications in electrochemical energy storage. However, it still remains a challenge to composite graphene…
Predicting Hard Rock Pillar Stability Using GBDT, XGBoost, and LightGBM Algorithms
The proposed methodology provides a reliable reference for pillar design and stability risk management, and achieves better overall performance than other ML algorithms.
ZeroVL: A Strong Baseline for Aligning Vision-Language Representations with Limited Resources
A reproducible, strong baseline with competitive results, ZeroVL, is provided using only 14M samples from publicly accessible academic datasets and 8 V100 GPUs, showing that dual-encoder multi-modal representation alignment can be conducted with limited resources.
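Dual-encoder alignment of this kind is typically trained with a symmetric contrastive (InfoNCE-style) objective over matched image/text embedding pairs. A minimal pure-Python sketch, assuming in-batch negatives; the function name and temperature value are illustrative, and ZeroVL's actual training adds further techniques:

```python
import math


def info_nce(image_embs, text_embs, temperature=0.07):
    """Symmetric contrastive loss: matched image/text pairs (the
    diagonal of the similarity matrix) should score higher than all
    in-batch mismatches."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]

    imgs = [normalize(v) for v in image_embs]
    txts = [normalize(v) for v in text_embs]
    n = len(imgs)
    logits = [[dot(imgs[i], txts[j]) / temperature for j in range(n)]
              for i in range(n)]

    def cross_entropy(rows):
        # Target for row i is column i (the matched pair).
        total = 0.0
        for i, row in enumerate(rows):
            m = max(row)
            lse = m + math.log(sum(math.exp(x - m) for x in row))
            total += lse - row[i]
        return total / len(rows)

    cols = [[logits[i][j] for i in range(n)] for j in range(n)]
    return 0.5 * (cross_entropy(logits) + cross_entropy(cols))
```

With perfectly matched embeddings the loss approaches zero; swapping the text order makes it large, which is what drives the two encoders into a shared space.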
Statistical insights into deep neural network learning in subspace classification
The results provide an important complement to the common belief in representation learning, suggesting that, at least in some model settings, although the performance of the DNN is comparable with that of the ideal two‐step procedure that knows the true latent cluster information a priori, it does not really perform clustering in any of its layers.