shadow-peft-models
Pretrained weights and data for the ShadowPEFT paper.
ShadowPEFT is a parameter-efficient fine-tuning (PEFT) framework that augments a frozen large base model with a lightweight, centralized, and detachable Shadow network. Unlike standard LoRA-style adapters, which insert separate trainable weights into each backbone layer, ShadowPEFT performs layer-level refinement through a depth-shared shadow module: a single module reused at every layer. Because the shadow module is architecturally decoupled from the backbone, it can be trained and deployed as a standalone component, which is particularly beneficial for edge computing and robotics.
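To make the depth-shared design concrete, below is a minimal sketch of such a module, written as a residual refinement applied after each frozen transformer layer. The class name, the bottleneck layout, and all hyperparameters are illustrative assumptions, not the paper's actual architecture:

import torch
import torch.nn as nn

class SharedShadow(nn.Module):
    """A single lightweight module reused at every transformer layer (hypothetical sketch)."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual refinement: the frozen layer's output plus a small learned correction.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# One module serves all layers, so its parameter count is independent of
# backbone depth, and it can be detached and shipped on its own.
shadow = SharedShadow(hidden_size=1024)
h = torch.randn(2, 16, 1024)   # (batch, seq_len, hidden) from a frozen layer
for _ in range(24):            # reuse the same module at every layer depth
    h = shadow(h)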
To use this shadow model, first install the shadow-peft library:
pip install shadow-peft
Then, you can load the pre-trained projected shadow model using the following code:
from shadow_peft import AutoModelForCausalLMWithHiddenProjection

# Load the pre-trained projected shadow model
shadow_model = AutoModelForCausalLMWithHiddenProjection.from_pretrained(
    "shadow-llm/Qwen3-0.6B-H8B",
    freeze_backbone=False,     # keep backbone trainable (default)
    freeze_embed_tokens=True,  # freeze input embeddings (default)
    freeze_lm_head=True,       # freeze lm_head (default)
)
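As a quick sanity check, you can confirm which parameters remain trainable under the flags above. This sketch assumes only that the returned object is a standard torch.nn.Module, which models loaded via from_pretrained conventionally are:

# Count trainable vs. total parameters after loading.
trainable = sum(p.numel() for p in shadow_model.parameters() if p.requires_grad)
total = sum(p.numel() for p in shadow_model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")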
If you use ShadowPEFT in your work, please cite:

@article{li2026shadowpeft,
  title={ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning},
  author={Xianming Li and Zongxi Li and Tsz-fung Andrew Lee and Jing Li and Haoran Xie and Qing Li},
  year={2026},
  eprint={2604.19254},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2604.19254},
}