# Gemma3-27B-FT-Q8
This repository contains a fine-tuned version of the Gemma-3-27B model, quantized to 8-bit precision.
## Installation
To use this model, install the `transformers` and `torch` libraries:

```bash
pip install transformers torch
```
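A 27B-parameter model is large even at 8-bit precision (roughly 27 GB of weights), so you will likely want to shard it across available devices. If you use `device_map="auto"` as shown in the sketch later in this document, the `accelerate` package is also required; this is a suggestion about a common setup, not a requirement stated by this repository:

```bash
pip install accelerate
```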
## Usage
You can use the model for text generation. Here is an example of how to load the model and generate text:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("snxtyle/gemma3-27b-ft-q8")
model = AutoModelForCausalLM.from_pretrained("snxtyle/gemma3-27b-ft-q8")

prompt = "Where is DPIP failing?"
# Tokenize the prompt and move the tensors to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate text; max_new_tokens caps the length of the completion
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
This is a basic example. You can find more information about the `generate` method and its parameters in the Hugging Face documentation.
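For instance, here is a minimal sketch of sampling-based generation, assuming a GPU with enough memory and the `accelerate` package installed; the parameter values are illustrative, not tuned for this model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# device_map="auto" shards the weights across available devices;
# torch_dtype="auto" keeps the dtype stored in the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("snxtyle/gemma3-27b-ft-q8")
model = AutoModelForCausalLM.from_pretrained(
    "snxtyle/gemma3-27b-ft-q8",
    device_map="auto",
    torch_dtype="auto",
)

inputs = tokenizer("Where is DPIP failing?", return_tensors="pt").to(model.device)

# Sampling instead of the default greedy decoding;
# these values are illustrative starting points.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (the default) is deterministic; enabling `do_sample` with `temperature` and `top_p` trades determinism for more varied output.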
## License
This project is licensed under the MIT License. See the LICENSE file for details.
Copyright (c) 2025 snxtyle