Paper: The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants (arXiv:2308.16884)
| query | sol1 | sol2 | sol3 | sol4 | label |
|---|---|---|---|---|---|
| Allah the Exalted says: "So let him call his associates (17); We will call al-Zabaniyah (18)" (al-ʿAlaq 96:17–18). The word al-Zabaniyah refers to | the angels of the mountains | the angels of the clouds | the keepers of Hellfire | the bearers of the Throne | 2 |
| The Prophet ﷺ said: "The best of you are those who learn the Qur'an." One of the following indicates the virtue of learning the Noble Qur'an: for every letter you earn | fifteen good deeds | ten good deeds | one good deed | five good deeds | 1 |
| A beautiful character trait that calls its possessor to do what is good and leave what is wrong is | modesty | trustworthiness | humility | truthfulness | 1 |
| The angel who descends with revelation from Allah the Exalted to His prophets is | Israfil | Malik | Mika'il | Jibril | 3 |
| Among the nullifiers of ablution (wudu) is | sweat and exertion | discharge from the two passages | impurity on one's clothing | drinking water | 1 |
AlGhafa is a multiple-choice evaluation benchmark for zero- and few-shot evaluation of Arabic LLMs, for which several existing tasks were adapted. Citation:
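Rows in this dataset share the schema shown above: a `query`, four candidate answers `sol1`–`sol4`, and a 0-indexed `label`. A minimal sketch of how zero-shot multiple-choice scoring over such rows might look is given below; `score_continuation` is a hypothetical stand-in for a real model's log-likelihood of a choice given the query (here replaced by a trivial length heuristic so the sketch runs end to end).

```python
def score_continuation(query: str, choice: str) -> float:
    # Placeholder scorer: a real evaluation would sum the model's token
    # log-probabilities of `choice` conditioned on `query`.
    return -abs(len(choice) - len(query) % 20)


def predict(row: dict) -> int:
    # Score each of the four candidate answers and pick the best one.
    choices = [row[f"sol{i}"] for i in range(1, 5)]
    scores = [score_continuation(row["query"], c) for c in choices]
    return max(range(4), key=scores.__getitem__)  # 0-indexed, matching `label`


def accuracy(rows: list[dict]) -> float:
    # Fraction of rows where the top-scoring choice equals the gold label.
    correct = sum(predict(r) == r["label"] for r in rows)
    return correct / len(rows)
```

In practice the rows would come from `datasets.load_dataset`, and the scorer would wrap an actual Arabic LLM; the 0-indexed `label` convention is what the table above uses.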
@inproceedings{almazrouei-etal-2023-alghafa,
title = "{A}l{G}hafa Evaluation Benchmark for {A}rabic Language Models",
author = "Almazrouei, Ebtesam and
Cojocaru, Ruxandra and
Baldo, Michele and
Malartic, Quentin and
Alobeidli, Hamza and
Mazzotta, Daniele and
Penedo, Guilherme and
Campesan, Giulia and
Farooq, Mugariya and
Alhammadi, Maitha and
Launay, Julien and
Noune, Badreddine",
editor = "Sawaf, Hassan and
El-Beltagy, Samhaa and
Zaghouani, Wajdi and
Magdy, Walid and
Abdelali, Ahmed and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Habash, Nizar and
Khalifa, Salam and
Keleg, Amr and
Haddad, Hatem and
Zitouni, Imed and
Mrini, Khalil and
Almatham, Rawan",
booktitle = "Proceedings of ArabicNLP 2023",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.arabicnlp-1.21",
doi = "10.18653/v1/2023.arabicnlp-1.21",
pages = "244--275",
abstract = "Recent advances in the space of Arabic large language models have opened up a wealth of potential practical applications. From optimal training strategies, large scale data acquisition and continuously increasing NLP resources, the Arabic LLM landscape has improved in a very short span of time, despite being plagued by training data scarcity and limited evaluation resources compared to English. In line with contributing towards this ever-growing field, we introduce AlGhafa, a new multiple-choice evaluation benchmark for Arabic LLMs. For showcasing purposes, we train a new suite of models, including a 14 billion parameter model, the largest monolingual Arabic decoder-only model to date. We use a collection of publicly available datasets, as well as a newly introduced HandMade dataset consisting of 8 billion tokens. Finally, we explore the quantitative and qualitative toxicity of several Arabic models, comparing our models to existing public Arabic LLMs.",
}