Instructions to use pmczip/Z-Image-Turbo_Models with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use pmczip/Z-Image-Turbo_Models with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("pmczip/Z-Image-Turbo_Models")

prompt = "413xkin65t0n, auburn hair, blue eyes, denim jacket, black turtleneck sweater, jeans, lips, outdoors, on a ranch, leaning on a wooden post, morning, bright"
image = pipe(prompt).images[0]
```
- Inference
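The snippet above hardcodes `device_map="cuda"` and notes that Apple devices should use `"mps"` instead. One way to handle this, sketched below as a small hypothetical helper (not part of the model card), is to pick the device at runtime:

```python
import torch


def pick_device() -> str:
    """Return the best available torch device string.

    Preference order: CUDA GPU, then Apple Metal (mps), then CPU.
    """
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


# Usage with the pipeline from the snippet above:
# pipe = DiffusionPipeline.from_pretrained(
#     "Tongyi-MAI/Z-Image-Turbo", dtype=torch.bfloat16, device_map=pick_device()
# )
```

Note that `torch.float16` may be preferable to `bfloat16` on `mps`, since bfloat16 support on Apple silicon is limited in older PyTorch releases.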
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
Founding Engineer Role: Access to AIRAWAT Supercomputer (Zulense)
Hi Pmczip,
I’ve been following your recent work with FLUX and SDXL on Hugging Face (specifically the ZImage model). Your implementation shows a deep understanding of the latest diffusion architectures.
I am the founder of Zulense, an AI startup building generative video models specifically for education. We have secured access to the AIRAWAT Supercomputer (Govt of India's AI Cluster) to train large-scale video models from scratch.
I am looking for a Founding AI Engineer to lead our model training.
The Mission: Solve temporal consistency in educational videos (Math/Science).
The Tools: You will have full access to our H100/A100 clusters to train models most people can only read about.
The Role: You will own the architecture (PyTorch/Diffusers) and the training loop.
We are offering a competitive package (₹22LPA + Founding Equity) and, more importantly, the compute resources to build world-class models.
Are you open to a quick 10-minute chat this week?
Best,
Manish
Founder, Zulense