Nina-Dolphin

Nina-Dolphin is a merged multilingual causal language model for instruction following, reasoning, and chat. It combines several instruction-tuned base models (a sketch of a typical merge configuration follows the list):

  • google/gemma-2-27b-it
  • mistralai/Mixtral-8x22B-Instruct-v0.1
  • meta-llama/Llama-3.1-70B-Instruct
  • Qwen/Qwen2-72B-Instruct
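
How exactly these checkpoints were combined is not stated on this card. As an illustration only, the sketch below writes a mergekit-style linear-merge configuration from Python; note that mergekit, and weight merging in general, requires architecturally compatible checkpoints, so the model IDs in the config are placeholders rather than the heterogeneous list above.

# Illustrative sketch only: mergekit configs are YAML; this just writes one.
config = """\
merge_method: linear
dtype: bfloat16
models:
  - model: meta-llama/Llama-3.1-70B-Instruct
    parameters:
      weight: 0.5
  - model: example-org/llama-3.1-70b-finetune   # hypothetical placeholder
    parameters:
      weight: 0.5
"""

with open("merge-config.yml", "w") as f:
    f.write(config)

# With mergekit installed, the merge would then be run as:
#   mergekit-yaml merge-config.yml ./merged-model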

Supported languages: English, Spanish, French, Japanese, Chinese, Italian, Russian

Training datasets include: OpenHermes-2.5, UltraChat-200k, Open-Platypus, MetaMathQA, Wikipedia (multiple languages), and OSCAR-2201.
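If you want to inspect these corpora, most are published on the Hugging Face Hub. A minimal sketch for loading two of them, assuming their usual Hub IDs (the card itself does not give dataset paths):

from datasets import load_dataset

# The Hub IDs below are assumptions based on the dataset names above,
# not paths taken from this model card.
hermes = load_dataset("teknium/OpenHermes-2.5", split="train")
ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

print(hermes[0])
print(ultrachat[0]["messages"])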

Usage Example

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

tokenizer = AutoTokenizer.from_pretrained("Abigail45/Nina-Dolphin")

# Recent transformers versions expect quantization settings in a
# BitsAndBytesConfig; the bare load_in_4bit argument is deprecated.
model = AutoModelForCausalLM.from_pretrained(
    "Abigail45/Nina-Dolphin",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

prompt = "Summarize the causes of the French Revolution."
# Move the input tensors onto the same device as the model.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,         # nucleus sampling
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
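
Because Nina-Dolphin is an instruction/chat model, prompts will usually work better when formatted with the tokenizer's chat template, assuming the merged tokenizer ships one (this card does not confirm it). Continuing from the example above, the sketch below also exercises the multilingual support with a Spanish prompt ("Summarize the causes of the French Revolution."):

# Assumes tokenizer and model from the example above, and that the
# tokenizer defines a chat template (not confirmed by this card).
messages = [
    {"role": "user", "content": "Resume las causas de la Revolución Francesa."}
]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,   # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

chat_outputs = model.generate(chat_inputs, max_new_tokens=200)
print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))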