Hacker News

MNIST is a classic image classification exercise - a dataset of 60,000 training images and 10,000 testing images, where each image is a handwritten digit rendered as a 28x28-pixel grayscale image.

The challenge is to build a computer vision model that can tell which numeral each handwritten digit represents.

https://en.wikipedia.org/wiki/MNIST_database

78% accuracy is pretty bad for MNIST, but achieving it using nothing but GZIP is a very neat hack.
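A minimal sketch of the kind of GZIP trick being discussed (this is an illustration, not the linked author's exact code): classify each test image by 1-nearest-neighbour under normalized compression distance (NCD), using the gzip-compressed length as a stand-in for Kolmogorov complexity. The `clen`, `ncd`, and `classify` names, and the toy byte-pattern "images", are my own for the demo.

```python
import gzip


def clen(b: bytes) -> int:
    """Length of the gzip-compressed byte string (complexity proxy)."""
    return len(gzip.compress(b))


def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when a and b share structure,
    because compressing them together saves more space."""
    ca, cb, cab = clen(a), clen(b), clen(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)


def classify(test: bytes, train: list) -> int:
    """Predict the label of the training example closest under NCD."""
    return min(train, key=lambda item: ncd(test, item[0]))[1]


# Toy demo: two synthetic 784-byte "images" standing in for 28x28 MNIST pixels.
train = [(b"\x00" * 784, 0), (b"\xff" * 784, 1)]
print(classify(b"\x00" * 780 + b"\x01" * 4, train))  # closest to the all-zero pattern
```

On real MNIST you would feed in the raw (or binarized) pixel bytes of each image; compression-based distances are crude for images, which is consistent with the quoted ~78% rather than the 99%+ of trained convolutional models.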



What's the average human performance for this task?


I don't know offhand, but go take a look at the images - I would expect near 100%.


Sadly, there are several errors in the labeled data, so no one should get 100%.

See https://labelerrors.com/


Just looking at a few of those, I think I mostly see them as MNIST labels them? But yes, no one could get 100% due to ambiguity.

Very neat site though, I appreciate you showing that to me


thank you



