Papers - ResNet
• Wide Residual Networks (arXiv:1605.07146)
• Characterizing signal propagation to close the performance gap in unnormalized ResNets (arXiv:2101.08692)
• Pareto-Optimal Quantized ResNet Is Mostly 4-bit (arXiv:2105.03536)
• When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations (arXiv:2106.01548)
• ResNet strikes back: An improved training procedure in timm (arXiv:2110.00476)
• A ResNet is All You Need? Modeling A Strong Baseline for Detecting Referable Diabetic Retinopathy in Fundus Images (arXiv:2210.03180)
• Deep Residual Learning for Image Recognition (arXiv:1512.03385)
• Revisiting ResNets: Improved Training and Scaling Strategies (arXiv:2103.07579)
• Densely Connected Convolutional Networks (arXiv:1608.06993)
• Aggregated Residual Transformations for Deep Neural Networks (arXiv:1611.05431)
• RTSeg: Real-time Semantic Segmentation Comparative Study (arXiv:1803.02758)
• Latent Diffusion Model for Medical Image Standardization and Enhancement (arXiv:2310.05237)
• 3D Medical Image Segmentation based on multi-scale MPU-Net (arXiv:2307.05799)
• Joint Liver and Hepatic Lesion Segmentation in MRI using a Hybrid CNN with Transformer Layers (arXiv:2201.10981)
• Bootstrap your own latent: A new approach to self-supervised Learning (arXiv:2006.07733)
• From Modern CNNs to Vision Transformers: Assessing the Performance, Robustness, and Classification Strategies of Deep Learning Models in Histopathology (arXiv:2204.05044)
• Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology (arXiv:2203.00585)
• EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (arXiv:1905.11946)
• DAS: A Deformable Attention to Capture Salient Information in CNNs (arXiv:2311.12091)
• Semi-Supervised Semantic Segmentation using Redesigned Self-Training for White Blood Cells (arXiv:2401.07278)
• Adding Conditional Control to Text-to-Image Diffusion Models (arXiv:2302.05543)
• Data Distributional Properties Drive Emergent In-Context Learning in Transformers (arXiv:2205.05055)
• CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents (arXiv:2004.12629)
• Realism in Action: Anomaly-Aware Diagnosis of Brain Tumors from Medical Images Using YOLOv8 and DeiT (arXiv:2401.03302)
• Detecting and recognizing characters in Greek papyri with YOLOv8, DeiT and SimCLR (arXiv:2401.12513)
• DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets (arXiv:2404.02900)
• DeiT III: Revenge of the ViT (arXiv:2204.07118)
• Transferable and Principled Efficiency for Open-Vocabulary Segmentation (arXiv:2404.07448)
• ConsistencyDet: Robust Object Detector with Denoising Paradigm of Consistency Model (arXiv:2404.07773)
• Long-form music generation with latent diffusion (arXiv:2404.10301)
• GLIGEN: Open-Set Grounded Text-to-Image Generation (arXiv:2301.07093)
• A Multimodal Automated Interpretability Agent (arXiv:2404.14394)
• What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation (arXiv:2404.07129)
• Multiplication-Free Transformer Training via Piecewise Affine Operations (arXiv:2305.17190)
• Large Scale GAN Training for High Fidelity Natural Image Synthesis (arXiv:1809.11096)
• Revisiting Unreasonable Effectiveness of Data in Deep Learning Era (arXiv:1707.02968)
• Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space (arXiv:2406.19370)
• arXiv:2303.14027
• Equivariant Transformer Networks (arXiv:1901.11399)
• Fixup Initialization: Residual Learning Without Normalization (arXiv:1901.09321)
• RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness (arXiv:2206.14502)
• Geodesic Multi-Modal Mixup for Robust Fine-Tuning (arXiv:2203.03897)
• RT-DETRv2: Improved Baseline with Bag-of-Freebies for Real-Time Detection Transformer (arXiv:2407.17140)
• DETRs Beat YOLOs on Real-time Object Detection (arXiv:2304.08069)
• No More Adam: Learning Rate Scaling at Initialization is All You Need (arXiv:2412.11768)
• Matryoshka Representation Learning (arXiv:2205.13147)
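
The collection above centers on residual learning (Deep Residual Learning for Image Recognition, arXiv:1512.03385). As a quick reference, here is a minimal NumPy sketch of the core idea, the identity shortcut y = x + F(x); the function and variable names are illustrative only, not taken from any of the papers:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: y = x + F(x),
    where F is two linear maps with a ReLU in between."""
    f = relu(x @ w1) @ w2   # the residual function F(x)
    return x + f            # identity shortcut

# With zero weights, F(x) == 0 and the block reduces to the identity map,
# which is the property that keeps very deep stacks of such blocks trainable.
x = np.ones((1, 4))
w_zero = np.zeros((4, 4))
y = residual_block(x, w_zero, w_zero)  # y equals x exactly
```

The sketch omits convolutions, batch normalization, and the projection shortcut used when dimensions change, all of which the original paper describes.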