url (string, 58–61 chars) | repository_url (string, 1 value) | labels_url (string, 72–75 chars) | comments_url (string, 67–70 chars) | events_url (string, 65–68 chars) | html_url (string, 48–51 chars) | id (int64, 600M–2.19B) | node_id (string, 18–24 chars) | number (int64, 2–6.73k) | title (string, 1–290 chars) | user (dict) | labels (list, 0–4 items) | state (string, 2 values) | locked (bool, 1 value) | assignee (dict) | assignees (list, 0–4 items) | milestone (dict) | comments (sequence, 0–30 items) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 3 values) | active_lock_reason (null) | draft (null) | pull_request (null) | body (string, 0–228k chars, nullable) | reactions (dict) | timeline_url (string, 67–70 chars) | performed_via_github_app (null) | state_reason (string, 3 values)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6734/comments | https://api.github.com/repos/huggingface/datasets/issues/6734/events | https://github.com/huggingface/datasets/issues/6734 | 2,187,646,694 | I_kwDODunzps6CZNbm | 6,734 | Tokenization slows towards end of dataset | {
"login": "ethansmith2000",
"id": 98723285,
"node_id": "U_kgDOBeJl1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethansmith2000",
"html_url": "https://github.com/ethansmith2000",
"followers_url": "https://api.github.com... | [] | open | false | null | [] | null | [
"Hi ! First note that if the dataset is not heterogeneous / shuffled, there might be places in the data with shorter texts that are faster to tokenize.\r\n\r\nMoreover, the way `num_proc` works is by slicing the dataset and passing each slice to a process to run the `map()` function. So at the very end of `map()`, ... | 2024-03-15T03:27:36 | 2024-03-15T15:27:59 | null | NONE | null | null | null | ### Describe the bug
Mapped tokenization slows down substantially towards end of dataset.
train set started off very slow, caught up to 20k then tapered off til the end.
what's particularly strange is that the tokenization crashed a few times before due to errors with invalid tokens somewhere or corrupted down... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6734/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6733/comments | https://api.github.com/repos/huggingface/datasets/issues/6733/events | https://github.com/huggingface/datasets/issues/6733 | 2,186,811,724 | I_kwDODunzps6CWBlM | 6,733 | EmptyDatasetError when loading dataset downloaded with HuggingFace cli | {
"login": "StwayneXG",
"id": 77196999,
"node_id": "MDQ6VXNlcjc3MTk2OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/77196999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StwayneXG",
"html_url": "https://github.com/StwayneXG",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"Hi! `datasets` is not compatible with `huggingface_hub`'s cache structure, hence the error.\r\n\r\nYou can track https://github.com/huggingface/datasets/issues/5080 to get notified when this is implemented."
] | 2024-03-14T16:41:27 | 2024-03-15T18:09:02 | null | NONE | null | null | null | ### Describe the bug
I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset but I get an error:
```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6733/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6731/comments | https://api.github.com/repos/huggingface/datasets/issues/6731/events | https://github.com/huggingface/datasets/issues/6731 | 2,182,844,673 | I_kwDODunzps6CG5EB | 6,731 | Unexpected behavior when using load_dataset with streaming=True in a for loop | {
"login": "uApiv",
"id": 42908296,
"node_id": "MDQ6VXNlcjQyOTA4Mjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uApiv",
"html_url": "https://github.com/uApiv",
"followers_url": "https://api.github.com/users/uApiv/follow... | [] | open | false | null | [] | null | [
"This is normal behavior in python when using `lambda`: the `i` defined in your `lambda` refers to the global variable `i` in your loop, and `i` equals to `1` when you run your `for e in res[0]` line.\r\n\r\nYou should pass `fn_kwargs` that will be passed to your `lambda` instead of using the global variable:\r\n\r... | 2024-03-12T23:26:43 | 2024-03-14T15:27:02 | null | NONE | null | null | null | ### Describe the bug
### My Code
```
from datasets import load_dataset
res=[]
for i in [0,1]:
di=load_dataset(
"json",
data_files='path_to.json',
split='train',
streaming=True,
).map(lambda x: {"source": i})
res.append(di)
for e in res[... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6731/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6729/comments | https://api.github.com/repos/huggingface/datasets/issues/6729/events | https://github.com/huggingface/datasets/issues/6729 | 2,180,237,159 | I_kwDODunzps6B88dn | 6,729 | Support zipfiles that span multiple disks? | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/foll... | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [] | 2024-03-11T21:07:41 | 2024-03-11T21:07:46 | null | CONTRIBUTOR | null | null | null | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
F... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6729/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6728/comments | https://api.github.com/repos/huggingface/datasets/issues/6728/events | https://github.com/huggingface/datasets/issues/6728 | 2,178,607,012 | I_kwDODunzps6B2uek | 6,728 | Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT` | {
"login": "padeoe",
"id": 10057041,
"node_id": "MDQ6VXNlcjEwMDU3MDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/10057041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padeoe",
"html_url": "https://github.com/padeoe",
"followers_url": "https://api.github.com/users/padeoe/fo... | [] | closed | false | null | [] | null | [
"Through debugging, I found a potential solution is to modify the code in the error handling module of `huggingface_hub`: https://github.com/huggingface/huggingface_hub/commit/56d6c798c44e83d2a3167e74c022737d8fcbe822 ",
"@Wauplin ",
"Thanks for investigating and reporting the bug @padeoe! I've opened a PR in `h... | 2024-03-11T09:06:38 | 2024-03-15T14:52:07 | 2024-03-15T14:52:07 | NONE | null | null | null | ### Describe the bug
This bug is triggered under the following conditions:
- datasets repo ids without organization names trigger errors, such as `bookcorpus`, `gsm8k`, `wikipedia`, rather than in the form of `A/B`.
- If `HF_ENDPOINT` is set and the hostname is not in the form of `(hub-ci.)?huggingface.co`.
- T... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6728/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6726/comments | https://api.github.com/repos/huggingface/datasets/issues/6726/events | https://github.com/huggingface/datasets/issues/6726 | 2,177,097,232 | I_kwDODunzps6Bw94Q | 6,726 | Profiling for HF Filesystem shows there are easy performance gains to be made | {
"login": "awgr",
"id": 159512661,
"node_id": "U_kgDOCYH4VQ",
"avatar_url": "https://avatars.githubusercontent.com/u/159512661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awgr",
"html_url": "https://github.com/awgr",
"followers_url": "https://api.github.com/users/awgr/followers",
"f... | [] | open | false | null | [] | null | [
"FWIW I debugged this while waiting for it to go",
"Oh I forgot to mention you can also cache resolve_pattern, and that seemed to also substantially improves things, if you want to load a dataset twice for whatever reason."
] | 2024-03-09T07:08:45 | 2024-03-09T07:11:08 | null | NONE | null | null | null | ### Describe the bug
# Let's make it faster
First, an evidence...

Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6726/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6725/comments | https://api.github.com/repos/huggingface/datasets/issues/6725/events | https://github.com/huggingface/datasets/issues/6725 | 2,175,527,530 | I_kwDODunzps6Bq-pq | 6,725 | Request for a comparison of huggingface datasets compared with other data format especially webdataset | {
"login": "Luciennnnnnn",
"id": 20135317,
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luciennnnnnn",
"html_url": "https://github.com/Luciennnnnnn",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-03-08T08:23:01 | 2024-03-08T08:23:01 | null | NONE | null | null | null | ### Feature request
Request for a comparison of huggingface datasets compared with other data format especially webdataset
### Motivation
I see huggingface datasets uses Apache Arrow as its backend, it seems to be great, but I'm curious about how it is good compared with other dataset format, like webdataset, what's... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6725/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6724/comments | https://api.github.com/repos/huggingface/datasets/issues/6724/events | https://github.com/huggingface/datasets/issues/6724 | 2,174,398,227 | I_kwDODunzps6Bmq8T | 6,724 | Dataset with loading script does not work in renamed repos | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users... | [] | open | false | null | [] | null | [] | 2024-03-07T17:38:38 | 2024-03-07T20:06:25 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
My data repository was first called `BramVanroy/hplt-mono-v1-2` but I then renamed to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains data loading scripts in this line.
https://github.com/huggingface/dat... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6724/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6721/comments | https://api.github.com/repos/huggingface/datasets/issues/6721/events | https://github.com/huggingface/datasets/issues/6721 | 2,173,931,714 | I_kwDODunzps6Bk5DC | 6,721 | Hi,do you know how to load the dataset from local file now? | {
"login": "Gera001",
"id": 50232044,
"node_id": "MDQ6VXNlcjUwMjMyMDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/50232044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gera001",
"html_url": "https://github.com/Gera001",
"followers_url": "https://api.github.com/users/Gera00... | [] | open | false | null | [] | null | [] | 2024-03-07T13:58:40 | 2024-03-07T13:58:40 | null | NONE | null | null | null | Hi, if I want to load the dataset from local file, then how to specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6721/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6720/comments | https://api.github.com/repos/huggingface/datasets/issues/6720/events | https://github.com/huggingface/datasets/issues/6720 | 2,173,603,459 | I_kwDODunzps6Bjo6D | 6,720 | TypeError: 'str' object is not callable | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"Hi ! I opened a PR to fix an issue in the Features defined in your code\r\n\r\nBasically changing\r\n```python\r\nSequence(\"float32\")\r\n```\r\n\r\nto\r\n```python\r\nSequence(Value(\"float32\"))\r\n```\r\n\r\n\r\nhttps://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/discussions/1",
"D'oh! Was wondering wh... | 2024-03-07T11:07:09 | 2024-03-08T07:34:53 | 2024-03-07T15:13:58 | CONTRIBUTOR | null | null | null | ### Describe the bug
I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6720/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6719/comments | https://api.github.com/repos/huggingface/datasets/issues/6719/events | https://github.com/huggingface/datasets/issues/6719 | 2,169,585,727 | I_kwDODunzps6BUUA_ | 6,719 | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | {
"login": "ssharpe42",
"id": 8136905,
"node_id": "MDQ6VXNlcjgxMzY5MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssharpe42",
"html_url": "https://github.com/ssharpe42",
"followers_url": "https://api.github.com/users/ss... | [] | open | false | null | [] | null | [] | 2024-03-05T15:55:13 | 2024-03-05T15:55:13 | null | NONE | null | null | null | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6719/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6717/comments | https://api.github.com/repos/huggingface/datasets/issues/6717/events | https://github.com/huggingface/datasets/issues/6717 | 2,168,726,432 | I_kwDODunzps6BRCOg | 6,717 | `remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio | {
"login": "jhauret",
"id": 53187038,
"node_id": "MDQ6VXNlcjUzMTg3MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/53187038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhauret",
"html_url": "https://github.com/jhauret",
"followers_url": "https://api.github.com/users/jhaure... | [] | open | false | null | [] | null | [
"And it also works well with `dataset = dataset.select_columns([\"audio\"])`"
] | 2024-03-05T09:33:26 | 2024-03-05T10:32:19 | null | NONE | null | null | null | ### Describe the bug
When loading a HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated.
### Steps to reproduce the bug
Minimal error code:
```python
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6717/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6716/comments | https://api.github.com/repos/huggingface/datasets/issues/6716/events | https://github.com/huggingface/datasets/issues/6716 | 2,168,706,558 | I_kwDODunzps6BQ9X- | 6,716 | Non-deterministic `Dataset.builder_name` value | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/fo... | [] | open | false | null | [] | null | [
"When `rotten_tomatoes` is printed out, the following warning message is also printed out:\r\n\r\n```\r\nYou can avoid this message in future by passing the argument `trust_remote_code=True`.\r\nPassing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.\r\n```... | 2024-03-05T09:23:21 | 2024-03-15T11:54:56 | null | NONE | null | null | null | ### Describe the bug
I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`:
```python
import datasets
for _ in range(100):
ds = datasets.load_dataset("rotten_tomatoes", split="train")
print(ds.builder_name) # pr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6716/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6703/comments | https://api.github.com/repos/huggingface/datasets/issues/6703/events | https://github.com/huggingface/datasets/issues/6703 | 2,163,250,590 | I_kwDODunzps6A8JWe | 6,703 | Unable to load dataset that was saved with `save_to_disk` | {
"login": "casper-hansen",
"id": 27340033,
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casper-hansen",
"html_url": "https://github.com/casper-hansen",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"`save_to_disk` uses a special serialization that can only be read using `load_from_disk`.\r\n\r\nContrary to `load_dataset`, `load_from_disk` directly loads Arrow files and uses the dataset directory as cache.\r\n\r\nOn the other hand `load_dataset` does a conversion step to get Arrow files from the raw data files... | 2024-03-01T11:59:56 | 2024-03-04T13:46:20 | 2024-03-04T13:46:20 | NONE | null | null | null | ### Describe the bug
I get the following error message: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.
### Steps to reproduce the bug
1. Save a dataset with `save_to_disk`
2. Try to load it with `load_datasets`
### Expected behavior
I am ab... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6703/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6702/comments | https://api.github.com/repos/huggingface/datasets/issues/6702/events | https://github.com/huggingface/datasets/issues/6702 | 2,161,938,484 | I_kwDODunzps6A3JA0 | 6,702 | Push samples to dataset on hub without having the dataset locally | {
"login": "jbdel",
"id": 17854096,
"node_id": "MDQ6VXNlcjE3ODU0MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbdel",
"html_url": "https://github.com/jbdel",
"followers_url": "https://api.github.com/users/jbdel/follow... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! For now I would recommend creating a new Parquet file using `dataset_new.to_parquet()` and upload it to HF using `huggingface_hub` every time you get a new batch of data. You can name the Parquet files `0000.parquet`, `0001.parquet`, etc.\r\n\r\nThough maybe make sure to not upload one file per sample since t... | 2024-02-29T19:17:12 | 2024-03-08T21:08:38 | 2024-03-08T21:08:38 | NONE | null | null | null | ### Feature request
Say I have the following code:
```
from datasets import Dataset
import pandas as pd
new_data = {
"column_1": ["value1", "value2"],
"column_2": ["value3", "value4"],
}
df_new = pd.DataFrame(new_data)
dataset_new = Dataset.from_pandas(df_new)
# add these samples to a remote datase... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6702/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6700/comments | https://api.github.com/repos/huggingface/datasets/issues/6700/events | https://github.com/huggingface/datasets/issues/6700 | 2,158,871,038 | I_kwDODunzps6ArcH- | 6,700 | remove_columns is not in-place but the doc shows it is in-place | {
"login": "shelfofclub",
"id": 32047804,
"node_id": "MDQ6VXNlcjMyMDQ3ODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/32047804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shelfofclub",
"html_url": "https://github.com/shelfofclub",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | [
"Good catch! I've opened a PR with a fix in the `transformers` repo.",
"@mariosasko Thanks!\r\n\r\nWill the doc of `datasets` be updated?\r\n\r\nI find some possible mistakes in doc about whether `remove_columns` is in-place.\r\n1. [You can also remove a column using map() with remove_columns but the present meth... | 2024-02-28T12:36:22 | 2024-02-29T03:02:54 | null | NONE | null | null | null | ### Describe the bug
The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
h... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6700/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6699/comments | https://api.github.com/repos/huggingface/datasets/issues/6699/events | https://github.com/huggingface/datasets/issues/6699 | 2,158,152,341 | I_kwDODunzps6AosqV | 6,699 | `Dataset` unexpected changed dict data and may cause error | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/fo... | [] | open | false | null | [] | null | [
"If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn... | 2024-02-28T05:30:10 | 2024-02-28T19:14:36 | null | NONE | null | null | null | ### Describe the bug
Will unexpected get keys with `None` value in the parsed json dict.
### Steps to reproduce the bug
```jsonl test.jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```
```python
dataset = Dataset.from_json('.test.jsonl')
print(dataset[0])
```
Result:
```... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6699/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6697/comments | https://api.github.com/repos/huggingface/datasets/issues/6697/events | https://github.com/huggingface/datasets/issues/6697 | 2,157,322,224 | I_kwDODunzps6Alh_w | 6,697 | Unable to Load Dataset in Kaggle | {
"login": "vrunm",
"id": 97465624,
"node_id": "U_kgDOBc81GA",
"avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrunm",
"html_url": "https://github.com/vrunm",
"followers_url": "https://api.github.com/users/vrunm/followers",
... | [] | closed | false | null | [] | null | [
"FWIW, I run `load_dataset(\"llm-blender/mix-instruct\")` and it ran successfully.\r\nCan you clear your cache and try again?\r\n\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.17.0\r\n- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- `huggingface_hub` versi... | 2024-02-27T18:19:34 | 2024-02-29T17:32:42 | 2024-02-29T17:32:41 | NONE | null | null | null | ### Describe the bug
Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1 Unable to load the dataset in a kaggle notebook.
Get this Error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recen... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6697/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6695/comments | https://api.github.com/repos/huggingface/datasets/issues/6695/events | https://github.com/huggingface/datasets/issues/6695 | 2,154,075,509 | I_kwDODunzps6AZJV1 | 6,695 | Support JSON file with an array of strings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, bu... | 2024-02-26T12:35:11 | 2024-03-08T14:16:25 | 2024-02-28T06:39:13 | MEMBER | null | null | null | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6695/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6691/comments | https://api.github.com/repos/huggingface/datasets/issues/6691/events | https://github.com/huggingface/datasets/issues/6691 | 2,152,134,041 | I_kwDODunzps6ARvWZ | 6,691 | load_dataset() does not support tsv | {
"login": "dipsivenkatesh",
"id": 26873178,
"node_id": "MDQ6VXNlcjI2ODczMTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/26873178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dipsivenkatesh",
"html_url": "https://github.com/dipsivenkatesh",
"followers_url": "https://api.gi... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "harsh1504660",
"id": 77767961,
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harsh1504660",
"html_url": "https://github.com/harsh1504660",
"followers_url": "https://api.github.c... | [
{
"login": "harsh1504660",
"id": 77767961,
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harsh1504660",
"html_url": "https://github.com/harsh1504660",
"followers_url": "... | null | [
"#self-assign",
"Hi @dipsivenkatesh,\r\n\r\nPlease note that this functionality is already implemented. Our CSV builder uses `pandas.read_csv` under the hood, and you can pass the parameter `delimiter=\"\\t\"` to read TSV files.\r\n\r\nSee the list of CSV config parameters in our docs: https://huggingface.co/docs... | 2024-02-24T05:56:04 | 2024-02-26T07:15:07 | 2024-02-26T07:09:35 | NONE | null | null | null | ### Feature request
the load_dataset() for local functions support file types like csv, json etc but not of type tsv (tab separated values).
### Motivation
cant easily load files of type tsv, have to convert them to another type like csv then load
### Your contribution
Can try by raising a PR with a little help, c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6691/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6690/comments | https://api.github.com/repos/huggingface/datasets/issues/6690/events | https://github.com/huggingface/datasets/issues/6690 | 2,150,800,065 | I_kwDODunzps6AMprB | 6,690 | Add function to convert a script-dataset to Parquet | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 2024-02-23T10:28:20 | 2024-02-23T10:28:20 | null | MEMBER | null | null | null | Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6690/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6689/comments | https://api.github.com/repos/huggingface/datasets/issues/6689/events | https://github.com/huggingface/datasets/issues/6689 | 2,149,581,147 | I_kwDODunzps6AIAFb | 6,689 | .load_dataset() method defaults to zstandard | {
"login": "ElleLeonne",
"id": 87243032,
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElleLeonne",
"html_url": "https://github.com/ElleLeonne",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n\r\nThat's why it asks for zstandard to be installed.\r\n\r\nThough I'm intrigued that you manage to load the dataset without zstandard installed. May... | 2024-02-22T17:39:27 | 2024-03-07T14:54:16 | 2024-03-07T14:54:15 | NONE | null | null | null | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6689/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6688/comments | https://api.github.com/repos/huggingface/datasets/issues/6688/events | https://github.com/huggingface/datasets/issues/6688 | 2,148,609,859 | I_kwDODunzps6AES9D | 6,688 | Tensor type (e.g. from `return_tensors`) ignored in map | {
"login": "srossi93",
"id": 11166137,
"node_id": "MDQ6VXNlcjExMTY2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/11166137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srossi93",
"html_url": "https://github.com/srossi93",
"followers_url": "https://api.github.com/users/sro... | [] | open | false | null | [] | null | [
"Hi, this is expected behavior since all the tensors are converted to Arrow data (the storage type behind a Dataset).\r\n\r\nTo get pytorch tensors back, you can set the dataset format to \"torch\":\r\n\r\n```python\r\nds = ds.with_format(\"torch\")\r\n```",
"Thanks. Just one additional question. During the pipel... | 2024-02-22T09:27:57 | 2024-02-22T15:56:21 | null | NONE | null | null | null | ### Describe the bug
I don't know if it is a bug or an expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping over to tokenize text with a transformers' tokenizer always returns lists and it ignore the `return_tensors` argument.
If this is an expected behaviour (e.g., fo... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6688/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6686/comments | https://api.github.com/repos/huggingface/datasets/issues/6686/events | https://github.com/huggingface/datasets/issues/6686 | 2,147,795,103 | I_kwDODunzps6ABMCf | 6,686 | Question: Is there any way for uploading a large image dataset? | {
"login": "zhjohnchan",
"id": 37367987,
"node_id": "MDQ6VXNlcjM3MzY3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhjohnchan",
"html_url": "https://github.com/zhjohnchan",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [] | 2024-02-21T22:07:21 | 2024-02-21T22:07:21 | null | NONE | null | null | null | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6686/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6679/comments | https://api.github.com/repos/huggingface/datasets/issues/6679/events | https://github.com/huggingface/datasets/issues/6679 | 2,141,953,981 | I_kwDODunzps5_q5-9 | 6,679 | Node.js 16 GitHub Actions are deprecated | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 2024-02-19T09:47:37 | 2024-02-28T06:56:35 | 2024-02-28T06:56:35 | MEMBER | null | null | null | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
> Node.js 16 actions are deprecat... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6679/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6676/comments | https://api.github.com/repos/huggingface/datasets/issues/6676/events | https://github.com/huggingface/datasets/issues/6676 | 2,140,648,619 | I_kwDODunzps5_l7Sr | 6,676 | Can't Read List of JSON Files Properly | {
"login": "lordsoffallen",
"id": 20232088,
"node_id": "MDQ6VXNlcjIwMjMyMDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordsoffallen",
"html_url": "https://github.com/lordsoffallen",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?",
"I don't think we should filter for `*.json` as this might silently rem... | 2024-02-17T22:58:15 | 2024-03-02T20:47:22 | null | NONE | null | null | null | ### Describe the bug
Trying to read a bunch of JSON files into Dataset class but default approach doesn't work. I don't get why it works when I read it one by one but not when I pass as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6676/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6675/comments | https://api.github.com/repos/huggingface/datasets/issues/6675/events | https://github.com/huggingface/datasets/issues/6675 | 2,139,640,381 | I_kwDODunzps5_iFI9 | 6,675 | Allow image model (color conversion) to be specified as part of datasets Image() decode | {
"login": "rwightman",
"id": 5702664,
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rwightman",
"html_url": "https://github.com/rwightman",
"followers_url": "https://api.github.com/users/rw... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.ca... | 2024-02-16T23:43:20 | 2024-03-01T19:43:55 | null | NONE | null | null | null | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.or... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6675/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6674/comments | https://api.github.com/repos/huggingface/datasets/issues/6674/events | https://github.com/huggingface/datasets/issues/6674 | 2,139,595,576 | I_kwDODunzps5_h6M4 | 6,674 | Depprcated Overview.ipynb Link to new Quickstart Notebook invalid | {
"login": "Codeblockz",
"id": 55932554,
"node_id": "MDQ6VXNlcjU1OTMyNTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Codeblockz",
"html_url": "https://github.com/Codeblockz",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Good catch! Feel free to open a PR to fix the link."
] | 2024-02-16T22:51:35 | 2024-02-25T18:48:09 | 2024-02-25T18:48:09 | CONTRIBUTOR | null | null | null | ### Describe the bug
For the dreprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb). The link to the new notebook is broken.
### Steps to reproduce the bug
Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quicksta... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6674/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6673/comments | https://api.github.com/repos/huggingface/datasets/issues/6673/events | https://github.com/huggingface/datasets/issues/6673 | 2,139,522,827 | I_kwDODunzps5_hocL | 6,673 | IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` | {
"login": "rwightman",
"id": 5702664,
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rwightman",
"html_url": "https://github.com/rwightman",
"followers_url": "https://api.github.com/users/rw... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU... | open | false | null | [] | null | [] | 2024-02-16T21:38:12 | 2024-02-22T13:17:14 | null | NONE | null | null | null | ### Describe the bug
When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes.
PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6673/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6671/comments | https://api.github.com/repos/huggingface/datasets/issues/6671/events | https://github.com/huggingface/datasets/issues/6671 | 2,138,727,870 | I_kwDODunzps5_emW- | 6,671 | CSV builder raises deprecation warning on verbose parameter | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 2024-02-16T14:23:46 | 2024-02-19T09:20:23 | 2024-02-19T09:20:23 | MEMBER | null | null | null | CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6671/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6670/comments | https://api.github.com/repos/huggingface/datasets/issues/6670/events | https://github.com/huggingface/datasets/issues/6670 | 2,138,372,958 | I_kwDODunzps5_dPte | 6,670 | ValueError | {
"login": "prashanth19bolukonda",
"id": 112316000,
"node_id": "U_kgDOBrHOYA",
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prashanth19bolukonda",
"html_url": "https://github.com/prashanth19bolukonda",
"followers_url": "ht... | [] | closed | false | null | [] | null | [
"Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923",
"Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggin... | 2024-02-16T11:05:17 | 2024-02-17T04:26:34 | 2024-02-16T14:43:53 | NONE | null | null | null | ### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transf... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6670/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6669/comments | https://api.github.com/repos/huggingface/datasets/issues/6669/events | https://github.com/huggingface/datasets/issues/6669 | 2,138,322,662 | I_kwDODunzps5_dDbm | 6,669 | attribute error when writing trainer.train() | {
"login": "prashanth19bolukonda",
"id": 112316000,
"node_id": "U_kgDOBrHOYA",
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prashanth19bolukonda",
"html_url": "https://github.com/prashanth19bolukonda",
"followers_url": "ht... | [] | closed | false | null | [] | null | [
"Hi! Kaggle notebooks use an outdated version of `datasets`, so you should update the `datasets` installation (with `!pip install -U datasets`) to avoid the error.",
"Thank you for your response\r\n\r\nOn Thu, Feb 29, 2024 at 10:55 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Closed #6669 <https://github.com/hu... | 2024-02-16T10:40:49 | 2024-03-01T10:58:00 | 2024-02-29T17:25:17 | NONE | null | null | null | ### Describe the bug
AttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6669/timeline | null | completed |
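The preview above uses the standard `datasets` schema, so a short, hedged sketch of loading and querying a dataset with this layout may be useful. The repo id `user/github-issues` is a placeholder, not the actual Hub path of this dataset, and the snippet is illustrative rather than part of the dataset card:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual Hub path of this dataset.
ds = load_dataset("user/github-issues", split="train")

# Columns follow the table header above: flat metadata fields plus nested
# dicts (user, reactions, milestone) and lists (labels, assignees, comments).
print(ds.features["title"])            # a string Value feature
print(ds[0]["number"], ds[0]["state"])

# Example query: keep closed issues and drop the large free-text body column.
# remove_columns returns a new Dataset; it does not modify ds in place.
closed = ds.filter(lambda row: row["state"] == "closed")
closed = closed.remove_columns("body")
print(len(closed))
```

Because `state` has only two classes (open/closed) per the header statistics, filtering on it is the natural way to slice this table.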