Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing

Vision transformers (ViTs) have significantly changed the computer vision landscape and have periodically exhibited superior performance in vision tasks compared to convolutional neural networks (CNNs). Although the jury is still out on which model type is superior, each has unique inductive biases that shape their learning and generalization performance. For example, ViTs have interesting properties with respect to early layer non-local feature dependence, as well as self-attention mechanisms which enhance learning flexibility, enabling them to ignore out-of-context image information more effectively. We hypothesize that this power to ignore out-of-context information (which we name $\textit{patch selectivity}$), while integrating in-context information in a non-local manner in early layers, allows ViTs to more easily handle occlusion. In this study, our aim is to see whether we can have CNNs $\textit{simulate}$ this ability of patch selectivity by effectively hardwiring this inductive bias using Patch Mixing data augmentation, which consists of inserting patches from another image onto a training image and interpolating labels between the two image classes. Specifically, we use Patch Mixing to train state-of-the-art ViTs and CNNs, assessing its impact on their ability to ignore out-of-context patches and handle natural occlusions. We find that ViTs neither improve nor degrade when trained using Patch Mixing, but CNNs acquire new capabilities to ignore out-of-context information and improve on occlusion benchmarks, leaving us to conclude that this training method is a way of simulating in CNNs the abilities that ViTs already possess. We will release our Patch Mixing implementation and proposed datasets for public use. Project page: https://arielnlee.github.io/PatchMixing/
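The abstract describes Patch Mixing as inserting patches from a second image into a training image and interpolating the labels accordingly. A minimal sketch of that idea in NumPy is below; the patch size, mixing ratio, and function name are illustrative choices of mine, not the paper's actual hyperparameters or implementation.

```python
import numpy as np

def patch_mixing(img_a, img_b, label_a, label_b,
                 patch_size=16, mix_ratio=0.3, rng=None):
    """Replace a random subset of img_a's patches with img_b's patches.

    Labels are interpolated by the fraction of replaced patches,
    as described in the abstract. Images are HxWxC arrays with
    H and W divisible by patch_size; labels are one-hot vectors.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = img_a.shape
    assert h % patch_size == 0 and w % patch_size == 0
    grid_h, grid_w = h // patch_size, w // patch_size
    n_patches = grid_h * grid_w

    # Pick which patches to overwrite with out-of-context content.
    n_mix = int(round(mix_ratio * n_patches))
    chosen = rng.choice(n_patches, size=n_mix, replace=False)

    mixed = img_a.copy()
    for i in chosen:
        row, col = divmod(i, grid_w)
        ys, xs = row * patch_size, col * patch_size
        mixed[ys:ys + patch_size, xs:xs + patch_size] = \
            img_b[ys:ys + patch_size, xs:xs + patch_size]

    # Interpolate labels by the fraction of patches taken from img_b.
    lam = n_mix / n_patches
    mixed_label = (1 - lam) * label_a + lam * label_b
    return mixed, mixed_label
```

For example, mixing a 32x32 all-zeros image with an all-ones image at `mix_ratio=0.5` replaces two of the four 16x16 patches and yields a label halfway between the two classes.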

KingsmanVince,
nsa,

Please don't post links to reddit.

KingsmanVince,

I know we are moving away from Reddit. However, if I don't link, I feel like we may miss out on good threads from r/machinelearning. Moreover, the authors don't only post arXiv links; they post other stuff such as summaries, key points, etc. (e.g. this).

So can I at least put them in the posts instead of posting in a comment?

Lenguador,

I find the link valuable. Despite the proliferation of AI in pop culture, actual discussion of machine learning research is still niche. The community on Reddit is quite valuable and took a long time to form.

nsa,

If there isn't any discussion on reddit (as in this case), I don't see a reason to link there; you can just link to the project page. That said, if you think there is important discussion happening that is helpful for understanding the paper, then use a teddit link instead, like:

https://teddit.net/r/MachineLearning/comments/14pq5mq/r_hardwiring_vit_patch_selectivity_into_cnns/

KingsmanVince,

I will follow then.

nsa,

That's appreciated!
