And accurate. It's just re-scaling and palette indexing, no different from what people toyed around with in Photoshop in 1998. It looks nothing at all like hand-pixeled art.
> Pyxelate downsamples images by (iteratively) dividing them into 3x3 tiles and calculating the orientation of edges inside them. Each tile is downsampled to a single pixel value based on the angle and magnitude of these gradients, resulting in an approximation of pixel art. This method was inspired by the Histogram of Oriented Gradients computer vision technique.
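For anyone curious what that quote means in practice, here's a toy sketch of the idea (this is my own simplification, not Pyxelate's actual code): compute per-pixel gradients, then collapse each 3x3 tile to one value using the gradient magnitude as a weight, so edge pixels dominate flat areas.

```python
import numpy as np

def downsample_tiles(gray, tile=3):
    """Toy gradient-weighted downsampler (hypothetical helper, not
    Pyxelate's implementation). Each tile x tile block of a grayscale
    image becomes one pixel, weighted toward high-gradient (edge)
    pixels rather than a plain box average."""
    h, w = gray.shape
    h -= h % tile  # crop to a multiple of the tile size
    w -= w % tile
    gy, gx = np.gradient(gray[:h, :w])   # per-pixel gradients
    mag = np.hypot(gx, gy)               # gradient magnitude
    out = np.empty((h // tile, w // tile))
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            block = gray[i:i + tile, j:j + tile]
            weights = mag[i:i + tile, j:j + tile] + 1e-8  # avoid /0
            out[i // tile, j // tile] = np.average(block, weights=weights)
    return out
```

The real library also uses the gradient *orientation* (the HoG part), which this sketch ignores.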
> Then an unsupervised machine learning method, a Bayesian Gaussian Mixture model, is fitted (instead of conventional K-means) to find a reduced palette. The tied Gaussians give a better estimate (than Euclidean distance) and allow smaller centroids to appear and then lose importance to larger ones further away. The probability mass function returned by the uncalibrated model is then used as a basis for different dithering techniques.
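The palette step maps pretty directly onto scikit-learn. A minimal sketch of what that quote describes (my reading of it, not Pyxelate's code; the synthetic pixel data is a stand-in for a real image):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
pixels = rng.random((500, 3))  # stand-in for an image's flattened RGB pixels

# Bayesian GMM with tied covariance: components share one covariance
# matrix, and the variational prior lets unneeded components shrink away.
gmm = BayesianGaussianMixture(
    n_components=8, covariance_type="tied", random_state=0
).fit(pixels)

palette = gmm.means_               # component means ~ the reduced palette
probs = gmm.predict_proba(pixels)  # soft per-pixel assignments; rows sum
                                   # to 1, usable as a basis for dithering
```

The "uncalibrated probability mass function" in the quote corresponds to these soft assignments: instead of snapping each pixel to its nearest palette color, the probabilities can drive where dither patterns blend two colors.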
> Preprocessing and color space conversion tricks are also applied for better results.
But yeah, for all the effort, it doesn't look that much better than just exporting with a small palette and positional dithering.
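The "just export with a small palette and dithering" baseline is a two-liner with Pillow (sketch; the random image is a placeholder for a real one):

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (24, 24, 3), dtype=np.uint8))

# Quantize to an 8-color palette; Pillow applies Floyd-Steinberg
# dithering by default when quantizing an RGB image.
small = img.quantize(colors=8)
```

That's the bar any fancier method has to beat.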
I'm kinda curious how this compares to ImageMagick with a limited colorspace. Though getting ImageMagick to produce some of those styles may be difficult or impossible (like the purple/pink pattern on the bottom-left corgi).
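For reference, the usual ImageMagick approximation of a pixel-art look is something like this (a sketch, assuming the IM7 `magick` CLI; use `convert` on IM6, and the filenames are placeholders):

```shell
# Downscale with -scale (simple pixel averaging, no smoothing blur),
# reduce to an 8-color palette with Floyd-Steinberg dithering,
# then blow it back up with -scale for hard, blocky pixels.
magick input.png \
  -scale 12.5% \
  -dither FloydSteinberg \
  -colors 8 \
  -scale 800% \
  output.png
```

This gets you the palette and blockiness, but nothing like the edge-aware downsampling or GMM-driven dithering described above, which is presumably where stylized patterns like that corgi come from.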