
LiconStudio / VBVR-wan2.2-comfy-bf16

Tags: Diffusers · Wan2.2 · i2v · fp8 · comfyui · video-generation · surgical-quant

Instructions for using LiconStudio/VBVR-wan2.2-comfy-bf16 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use LiconStudio/VBVR-wan2.2-comfy-bf16 with Diffusers:

    # First install the required libraries:
    #   pip install -U diffusers transformers accelerate
    import torch
    from diffusers import DiffusionPipeline

    # switch device_map to "mps" for Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "LiconStudio/VBVR-wan2.2-comfy-bf16",
        dtype=torch.bfloat16,
        device_map="cuda",
    )

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    image = pipe(prompt).images[0]
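    The comment above suggests switching the device to "mps" on Apple hardware. A minimal sketch of that choice (the helper name and injected flags are illustrative, not part of Diffusers; in a real script the flags would come from torch.cuda.is_available() and torch.backends.mps.is_available()):

    def pick_device(cuda_available: bool, mps_available: bool) -> str:
        """Return the device string to pass as device_map:
        prefer CUDA, then Apple's MPS backend, else fall back to CPU."""
        if cuda_available:
            return "cuda"
        if mps_available:
            return "mps"
        return "cpu"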
  • Wan2.2

    How to use LiconStudio/VBVR-wan2.2-comfy-bf16 with Wan2.2:

    # No code snippets available yet for this library.
    
    # To use this model, check the repository files and the library's documentation.
    
    # Want to help? PRs adding snippets are welcome at:
    # https://github.com/huggingface/huggingface.js
  • Notebooks
  • Google Colab
  • Kaggle

Community discussions

  • Inference settings for VBVR-wan2.2-comfy-bf16.safetensors
    1 reply. #3 opened 24 days ago by Heouzen

  • Is the "SNR-Calibrated-Hybrid" mentioned in the document a HiFi FP8 model, and does it need to be used with the lightx2v LoRA for acceleration?
    2 replies. #2 opened about 1 month ago by RedHn

  • Can this model be quantized to FP8 scale?
    8 replies. #1 opened 2 months ago by Nuke1229
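The FP8 question above is mostly about memory: bf16 stores each weight in 2 bytes, fp8 in 1, so quantizing the checkpoint roughly halves its weight footprint. A back-of-the-envelope estimate (the 14B parameter count is illustrative, not the measured size of this checkpoint, and the estimate ignores activations and the per-tensor scale factors fp8 formats keep):

    def weight_footprint_gib(num_params: int, bytes_per_param: int) -> float:
        """Approximate weight-only memory in GiB."""
        return num_params * bytes_per_param / 2**30

    params = 14_000_000_000  # illustrative 14B-parameter video transformer
    bf16 = weight_footprint_gib(params, 2)  # bf16: 2 bytes per weight
    fp8 = weight_footprint_gib(params, 1)   # fp8: 1 byte per weight
    print(f"bf16 ~= {bf16:.1f} GiB, fp8 ~= {fp8:.1f} GiB")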