Ben Shankles PRO
warshanks
AI & ML interests
MLX, AWQ / AI on the edge and in Healthcare
Recent Activity
- updated a model 6 days ago: warshanks/talkie-1930-13b-it-mlx-bf16
- updated a collection 6 days ago: Talkie MLX
- published a model 6 days ago: warshanks/talkie-1930-13b-it-mlx-bf16
Organizations
Discussions
- README Typo? · 2 · #8 opened 3 months ago by warshanks
- Update with vision support · 👍 2 · 4 · #3 opened 5 months ago by warshanks
- Official llama.cpp support merged · 1 · #1 opened 7 months ago by warshanks
- Sensitive to Quantization · 2 · #1 opened 8 months ago by warshanks
- Improve model card: Add pipeline tag, correct base model, and add code/project links · 2 · #1 opened 8 months ago by nielsr
- Issue with llama.cpp · 18 · #3 opened 8 months ago by wsbagnsv1
- Avoid demo to be embedded in other sites · #8 opened 8 months ago by osanseviero
- Feature Request: Disable reasoning · 👀 1 · 3 · #22 opened 9 months ago by SomAnon
- Quantization Script · 2 · #1 opened 9 months ago by kawchar85
- Model size? · 2 · #1 opened 9 months ago by warshanks
- Convert in bf16 or fp16? · 2 · #2 opened 9 months ago by remember2015
- Missing preprocessor_config.json · 5 · #2 opened 11 months ago by warshanks
- tokenizer_config.json is not correct · 12 · #1 opened 11 months ago by depasquale
- chat_template in tokenizer_config.json? · 1 · #1 opened 11 months ago by nff
- mlx-community/medgemma-27b-text-it-bf16 is entirely broken on mlx-lm · 👍 1 · 8 · #1 opened 12 months ago by sjug
- Convert model with mlx-vlm instead of mlx-lm to enable vision capabilities · 3 · #1 opened 12 months ago by ljoana