---
library_name: transformers
tags:
- code
- ReactJS
language:
- en
base_model:
- Qwen/Qwen3-1.7B-Base
base_model_relation: finetune
pipeline_tag: text-generation
---

# Model Information

Qwen3-ReactJs-code is a quantized, fine-tuned version of the Qwen3-1.7B-Base model, designed specifically for generating ReactJS code.

- **Base model:** Qwen/Qwen3-1.7B-Base

# How to use

With transformers version 4.51.0 or later, you can run conversational inference using the Transformers pipeline.

Make sure your installation is up to date via `pip install --upgrade transformers`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
```

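You can also verify the version requirement programmatically before proceeding. A minimal sketch using the `packaging` helper, which ships as a dependency of transformers (this check is an addition, not part of the original card):

```python
from packaging import version
import transformers

# transformers 4.51.0 or later is required for this model
assert version.parse(transformers.__version__) >= version.parse("4.51.0")
```
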
```python
def get_pipeline():
    model_name = "nirusanan/Qwen3-ReactJs-code"

    # Load the tokenizer and reuse the EOS token for padding
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    # Load the model weights in half precision on the first GPU
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="cuda:0",
        trust_remote_code=True
    )

    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)

    return pipe

pipe = get_pipeline()
```

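If GPU memory is tight, the model can instead be loaded with 4-bit quantization. This is a minimal sketch, not part of the original setup, and it assumes the `bitsandbytes` package is installed (`pip install bitsandbytes`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

model_name = "nirusanan/Qwen3-ReactJs-code"

# 4-bit NF4 quantization config (requires bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=3500)
```
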
```python
def generate_prompt(project_title, description):
    prompt = f"""Below is an instruction that describes a project. Write Reactjs code to accomplish the project described below.

### Instruction:
Project:
{project_title}

Project Description:
{description}

### Response:
"""
    return prompt
```

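To sanity-check the template, you can print a filled-in prompt first. The title and description below are placeholder values:

```python
print(generate_prompt(project_title="Todo App", description="A simple todo list with add and remove actions"))
```
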
```python
prompt = generate_prompt(project_title="Your ReactJs project", description="Your ReactJs project description")

result = pipe(prompt)
generated_text = result[0]['generated_text']

# The pipeline echoes the prompt before the completion; trim at the model's end marker
print(generated_text.split("### End")[0])
```
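
For downstream use you will usually want only the generated code, not the echoed prompt. A minimal sketch, assuming the model reliably emits the `### Response:` and `### End` markers from the prompt format above (the output filename is an arbitrary choice):

```python
# Keep only the completion between the response and end markers, then save it
code = generated_text.split("### Response:")[-1].split("### End")[0].strip()

with open("App.jsx", "w") as f:
    f.write(code)
```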