Related Concept Videos

Transformers with Off-Nominal Turns Ratios (01:25)
In scenarios involving parallel transformers with disparate ratings, developing per-unit models requires accommodating off-nominal turns ratios. This situation arises when the selected base voltages are not proportional to the transformer's voltage ratings. Consider a transformer where the rated voltages are related by the term a. If the chosen voltage bases satisfy a relationship involving term b, term c is defined as the ratio of these bases. This ratio is then substituted into the...

Vision (01:24)
Vision is the result of light being detected and transduced into neural signals by the retina of the eye. This information is then further analyzed and interpreted by the brain. First, light enters the front of the eye and is focused by the cornea and lens onto the retina, a thin sheet of neural tissue lining the back of the eye. Because of refraction through the convex lens of the eye, images are projected onto the retina upside-down and reversed.

Transformers in Distribution System (01:27)
Transformers in distribution systems can be broadly categorized into distribution substation transformers and other distribution transformers. They are crucial for stepping down high transmission voltages to levels suitable for distribution and end-user applications.
Distribution substation transformers come in various ratings and typically use mineral oil for insulation and cooling. To prevent moisture and air from entering the oil, some transformers use an inert gas like nitrogen to fill the...

Improving Translational Accuracy (02:07)
Base pairing between the three bases of an mRNA codon and the tRNA anticodon is not a failsafe mechanism. Inaccuracies can range from a single mismatch to no correct base pairing at all. The free-energy difference between correct and nearly correct base pairs can be as small as 3 kcal/mol. With complementarity being the only proofreading step, the estimated error frequency would be one wrong amino acid in every 100 amino acids incorporated. However, error frequencies observed in...
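
The 1-in-100 estimate quoted above can be recovered, roughly, by treating codon selection as governed only by this free-energy gap; a minimal Python sketch, taking the 3 kcal/mol figure from the text and assuming physiological temperature:

    import math

    # With no proofreading beyond base pairing, the misreading frequency is
    # roughly the Boltzmann factor exp(-dG / RT) for the free-energy gap dG
    # between correct and near-correct codon-anticodon pairs.
    R = 1.987e-3   # gas constant, kcal/(mol*K)
    T = 310.0      # physiological temperature, K (assumed)
    dG = 3.0       # free-energy difference, kcal/mol (from the text)

    print(f"estimated error frequency: {math.exp(-dG / (R * T)):.4f}")
    # prints ~0.0077, i.e. on the order of one error per hundred residues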

Reducing Line Loss (01:18)
In a three-phase circuit, line loss measures the energy dissipated as heat in the resistance of the transmission lines. A strategic way to reduce it is to add two three-phase transformers to the system: a step-up transformer at the source and a step-down transformer at the load.
With a step-up transformer at the source, the voltage is increased, thereby reducing the current in the transmission lines since power loss...
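
The effect can be made concrete with a small worked example in Python; the power level, voltages, and line resistance below are illustrative assumptions, and single-phase arithmetic is used for simplicity:

    # For a fixed delivered power P, raising the line voltage V lowers the line
    # current I = P / V, so the ohmic loss I**2 * R falls with the square of the
    # step-up ratio.
    P = 10e6   # delivered power, W (assumed)
    R = 2.0    # line resistance, ohms (assumed)
    for V in (13.8e3, 138e3):   # without vs. with a 1:10 step-up transformer
        I = P / V
        print(f"V = {V / 1e3:6.1f} kV   I = {I:7.1f} A   line loss = {I**2 * R / 1e3:7.1f} kW")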

The Ideal Transformer (01:26)
In single-phase two-winding transformers, two windings are coiled around a magnetic core characterized by cross-sectional area A and magnetic permeability μ. A phasor current i1 enters the left winding while i2 exits the right winding, establishing the transformer's operation through electromagnetic principles.
Ampere's Law forms the basis for understanding the magnetic field within the transformer. It states that the integral of the magnetic field intensity's...
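
For reference, the ideal (lossless) transformer relations that this kind of analysis leads to can be checked with a short Python sketch; the turns counts and primary quantities below are illustrative assumptions:

    # Ideal two-winding transformer: V1/V2 = N1/N2 = a and I1/I2 = N2/N1 = 1/a,
    # so the apparent power is the same on both sides.
    N1, N2 = 1000, 100       # primary and secondary turns (assumed)
    V1, I1 = 2400.0, 5.0     # primary voltage (V) and current (A), assumed
    a = N1 / N2              # turns ratio
    V2, I2 = V1 / a, I1 * a  # secondary voltage and current
    print(V2, I2, V1 * I1, V2 * I2)   # 240.0 50.0 12000.0 12000.0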

PRANCE: Joint Token-Optimization and Structural Channel-Pruning for Adaptive ViT Inference

Ye Li, Chen Tang, Yuan Meng

    IEEE Transactions on Pattern Analysis and Machine Intelligence
    September 2, 2025

    Related Experiment Videos

    A Swin Transformer-Based Model for Thyroid Nodule Detection in Ultrasound Images (04:23), published April 21, 2023
    Swin-PSAxialNet: An Efficient Multi-Organ Segmentation Technique (04:48), published July 5, 2024
    Author Spotlight: Insights into Visual Cortex Research Through Wide-View fMRI Mapping (07:11), published December 8, 2023

    View abstract on PubMed

    Summary
    This summary is machine-generated.

    PRANCE accelerates Vision Transformers (ViTs) by jointly optimizing channels and tokens per sample. This framework reduces computational complexity and model size without sacrificing accuracy, enabling efficient ViT deployment.

    Area of Science:

    • Computer Vision
    • Artificial Intelligence
    • Machine Learning

    Background:

    • Vision Transformers (ViTs) face deployment challenges due to their large model size and quadratic complexity in the token count (see the sketch after this list).
    • Existing methods for accelerating ViTs, like pruning and token reduction, often use fixed ratios and neglect joint optimization, leading to accuracy loss.
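
    For context, the quadratic term comes from self-attention, where every token attends to every other token. A rough cost sketch in Python (the embedding width and token counts are illustrative; 768 corresponds to a ViT-Base-sized model):

        # Rough multiply-accumulate counts for one self-attention layer. The
        # projection terms grow linearly with the token count N, while the
        # attention-matrix terms (Q @ K^T and attn @ V) grow as N^2 and
        # dominate once N is large (the quadratic complexity noted above).
        def attention_cost(n_tokens: int, dim: int = 768):
            linear_part = 4 * n_tokens * dim * dim          # Q/K/V/output projections
            quadratic_part = 2 * n_tokens * n_tokens * dim  # Q @ K^T and attn @ V
            return linear_part, quadratic_part

        for n in (197, 1024, 4096):
            lin, quad = attention_cost(n)
            print(f"N={n:5d}  linear={lin / 1e9:6.2f}G  quadratic={quad / 1e9:6.2f}G")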

    Purpose of the Study:

    • To introduce PRANCE, a novel framework for jointly optimizing activated channels and tokens on a per-sample basis to accelerate ViT inference.
    • To address the challenges of dynamic channel computation and the vast decision space in joint optimization.

    Main Methods:

    • Developed a meta-network with weight-sharing for dynamic channel support in Multi-Head Self-Attention (MHSA) and Multi-Layer Perceptron (MLP) layers.
    • Employed Proximal Policy Optimization (PPO) via a lightweight selector to efficiently manage the combinatorial optimization problem (a simplified, illustrative sketch follows this list).
    • Introduced a 'Result-to-Go' training mechanism modeling ViT inference as a Markov decision process to reduce action space and reward delay.
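
    The selector and the Markov-decision-process view can be pictured with a short, purely illustrative PyTorch sketch: before each ViT block, a tiny policy network summarizes the current tokens and samples a joint token-keep-ratio and channel-width action, so inference unfolds as a sequence of per-sample decisions that PPO can then train against an accuracy/efficiency reward. All class names, action sets, and dimensions below are hypothetical assumptions rather than the paper's implementation, and the PPO update and reward computation are omitted.

        # Hypothetical per-sample selector: maps a summary of each block's input
        # tokens to a joint (token keep-ratio, channel width) action. Calling it
        # before every block turns inference into a sequential decision process.
        import torch
        import torch.nn as nn

        TOKEN_RATIOS = [0.25, 0.5, 0.75, 1.0]   # fraction of tokens kept (assumed)
        CHANNEL_WIDTHS = [192, 384, 576, 768]   # candidate MHSA/MLP widths (assumed)

        class Selector(nn.Module):
            def __init__(self, feat_dim: int = 768, hidden: int = 128):
                super().__init__()
                # Lightweight policy head over the joint action space.
                self.policy = nn.Sequential(
                    nn.Linear(feat_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, len(TOKEN_RATIOS) * len(CHANNEL_WIDTHS)),
                )

            def forward(self, tokens):
                state = tokens.mean(dim=1)              # (batch, feat_dim) summary
                dist = torch.distributions.Categorical(logits=self.policy(state))
                action = dist.sample()                  # one joint action per sample
                return action // len(CHANNEL_WIDTHS), action % len(CHANNEL_WIDTHS)

        # One decision step for a batch of two samples (shapes illustrative only).
        ratio_idx, width_idx = Selector()(torch.randn(2, 197, 768))
        print([TOKEN_RATIOS[i] for i in ratio_idx.tolist()],
              [CHANNEL_WIDTHS[i] for i in width_idx.tolist()])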

    Main Results:

    • Achieved approximately 50% reduction in FLOPs (Floating Point Operations).
    • Retained only about 10% of input tokens.
    • Maintained lossless Top-1 accuracy, demonstrating significant efficiency gains.

    Conclusions:

    • PRANCE offers a unified approach to accelerate ViTs by simultaneously optimizing architecture and data.
    • The framework effectively tackles the trade-off between compression and accuracy, enabling efficient deployment of ViTs.