ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning

ShadowPEFT is a parameter-efficient fine-tuning (PEFT) framework that augments a frozen large base model with a lightweight, centralized, and detachable Shadow network. Unlike standard LoRA-style adapters, ShadowPEFT performs layer-level refinement through a depth-shared shadow module. This design allows the shadow module to be architecturally decoupled from the backbone, enabling it to be trained and deployed as a standalone component, which is particularly beneficial for edge computing and robotics.
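
The paper specifies the exact shadow architecture; the sketch below only illustrates the core idea of depth sharing: one small bottleneck module is reused at every backbone layer to apply a residual refinement to that layer's hidden states, so its parameter count does not grow with model depth. All names, sizes, and the toy frozen backbone here are illustrative assumptions, not the shadow-peft implementation.

import torch
import torch.nn as nn

class DepthSharedShadow(nn.Module):
    """Illustrative depth-shared shadow module (not the reference code).

    A single small bottleneck MLP is reused at every backbone layer,
    adding a residual correction to that layer's hidden states.
    """

    def __init__(self, hidden_size: int, shadow_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, shadow_size)
        self.act = nn.GELU()
        self.up = nn.Linear(shadow_size, hidden_size)
        nn.init.zeros_(self.up.weight)  # start as a no-op refinement
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual correction applied to a layer's output.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

hidden_size = 1024

# Stand-in for a frozen backbone: a stack of frozen layers.
backbone = nn.ModuleList([nn.Linear(hidden_size, hidden_size) for _ in range(24)])
for p in backbone.parameters():
    p.requires_grad_(False)

# A single shadow instance serves every depth, so it can be trained
# and shipped as a standalone component, detached from the backbone.
shadow = DepthSharedShadow(hidden_size)

h = torch.randn(2, 16, hidden_size)  # (batch, seq, hidden)
for layer in backbone:
    h = layer(h)   # frozen backbone computation
    h = shadow(h)  # layer-level refinement, same weights at every depth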

Sample Usage

To use this shadow model, first install the shadow-peft library:

pip install shadow-peft

Then, you can load the pre-trained projected shadow model using the following code:

from shadow_peft import AutoModelForCausalLMWithHiddenProjection

# Load the pre-trained projected shadow model
shadow_model = AutoModelForCausalLMWithHiddenProjection.from_pretrained(
    "shadow-llm/Qwen3-0.6B-H8B",
    freeze_backbone=False,      # keep backbone trainable (default)
    freeze_embed_tokens=True,   # freeze input embeddings (default)
    freeze_lm_head=True,        # freeze lm_head (default)
)
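
Once loaded, the model can presumably be driven like any Hugging Face causal LM. The snippet below assumes the shadow model exposes the standard generate() interface and that the repo ships a compatible tokenizer; verify both against the shadow-peft documentation.

from transformers import AutoTokenizer

# Assumption: the repo provides a tokenizer and the model supports
# the standard Hugging Face generate() API.
tokenizer = AutoTokenizer.from_pretrained("shadow-llm/Qwen3-0.6B-H8B")
inputs = tokenizer("The robot dog should", return_tensors="pt")
outputs = shadow_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))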

Citation

@article{li2026shadowpeft,
  title={ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning},
  author={Xianming Li and Zongxi Li and Tsz-fung Andrew Lee and Jing Li and Haoran Xie and Qing Li},
  year={2026},
  eprint={2604.19254},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2604.19254},
}