A few days ago, the NIPS 2017 conference came to an end, but its influence still lingers. A fascinating project accepted to the NIPS 2017 Workshop on Machine Learning for Creativity and Design recently went viral on Twitter. Its creator, Osamu Akiyama, introduced a method for converting images into ASCII art using convolutional neural networks (CNNs), and he has since released a coloring tool as well, capturing the attention of many machine learning enthusiasts.
The image above shows the transformation of ASCII art before and after coloring. The process involves two main steps: first, using CNNs to generate ASCII art, and second, applying color to it. For the first part, the author shared datasets and detailed explanations in a paper, while for the second part, he developed a simple online tool that allows users to upload their own ASCII images for coloring.
It's worth noting that this tool works best with clean, white-background ASCII art that has clear outlines. It only supports image uploads, not text documents.
In his paper, Osamu Akiyama described collecting about 500 pieces of ASCII art from Japanese BBS sites such as 5channel and Shitaraba to build a dataset. He faced a challenge, however: most of this ASCII art was drawn by hand rather than traced from a source image, so there were no image-ASCII pairs from which an algorithm could learn how to convert lines into characters.
To solve this, Akiyama used a neural-network-based sketch cleanup tool to reverse-engineer the ASCII art back into line drawings, restoring missing strokes and yielding smoother, more continuous images. By training his model on these restored drawings paired with the original ASCII art, the network learned which character best represents each region of a line drawing.
His CNN architecture includes 7 convolutional layers, 3 max-pooling layers, 2 fully connected layers, and an output layer. Inspired by the VGG network, the layer sequence is C64-C64-P-C128-C128-P-C256-C256-C256-P-FC4096-FC4096-O411, where C denotes a convolutional layer (with its number of filters), P a max-pooling layer, FC a fully connected layer, and O the output layer.
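For concreteness, here is a minimal Keras sketch of that layer sequence (the released code uses Keras). The 3×3 kernels, 64×64 grayscale input patch, ReLU activations, and training configuration are assumptions for illustration; only the C64-…-O411 ordering comes from the paper, reading O411 as a 411-way softmax over candidate characters.

```python
# A minimal sketch of the described VGG-style architecture.
# Input size, kernel size, activations, and optimizer are assumptions;
# only the layer ordering follows the paper's notation.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # C64-C64-P
    Conv2D(64, (3, 3), activation='relu', padding='same',
           input_shape=(64, 64, 1)),  # assumed 64x64 grayscale patch
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    # C128-C128-P
    Conv2D(128, (3, 3), activation='relu', padding='same'),
    Conv2D(128, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    # C256-C256-C256-P
    Conv2D(256, (3, 3), activation='relu', padding='same'),
    Conv2D(256, (3, 3), activation='relu', padding='same'),
    Conv2D(256, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    # FC4096-FC4096-O411: 411-way classification over candidate characters
    Flatten(),
    Dense(4096, activation='relu'),
    Dense(4096, activation='relu'),
    Dense(411, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```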
After training, the generated images were compared with manually created ASCII art and with the output of existing conversion tools. As the figure shows, the neural network reproduced details and contours more accurately than the other tools.
One payoff of generating cleaner ASCII art is better coloring: in tests, the coloring CNN performed best on clean, sharp line drawings. For example, compared with manually drawn ASCII art, the neural network's output had smoother lines, which improved the overall coloring effect.
All materials, including the dataset and models, are available on GitHub at https://github.com/OsciiArt/DeepAA. To run the code, you'll need TensorFlow (1.3.0), Keras (2.0.8), NumPy (1.13.3), Pillow (4.2.1), Pandas (0.18.0), Scikit-learn (0.19.0), and H5py (2.7.1).
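Assuming a pip-based setup, a pinned requirements file matching those versions might look like this:

```
tensorflow==1.3.0
keras==2.0.8
numpy==1.13.3
pillow==4.2.1
pandas==0.18.0
scikit-learn==0.19.0
h5py==2.7.1
```

Save it as `requirements.txt` and install with `pip install -r requirements.txt`.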
You can download the model and training data from Google Drive. After downloading, place them in the appropriate directories. To use the tool, modify the `output.py` file to point to your desired input image. Note that the image should be a dark gray line drawing.
If you want to use a lightweight version, adjust the model paths in `output.py` to use the lighter model files.
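The snippet below illustrates the kind of edit involved; the variable names and file paths are placeholders for this sketch, not necessarily those used in the actual `output.py`.

```python
# Illustrative edits near the top of output.py (names and paths are
# placeholders; check the repo for the actual variable names).
image_path = "images/my_line_drawing.png"  # input: a dark-gray line drawing

# Default (full) model files downloaded from Google Drive:
model_path = "model/model.json"
weight_path = "model/weight.hdf5"

# For the lightweight version, point to the light model files instead, e.g.:
# model_path = "model/model_light.json"
# weight_path = "model/weight_light.hdf5"
```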
The pipeline is rounded out by a CNN-based coloring tool that adds color to the ASCII images generated by the network. While its exact design isn't disclosed, you can try it at paintschainer.preferred.tech/index_en.html (Preferred Networks' PaintsChainer). The tool accepts only image uploads, and the results bring color into the otherwise black-and-white world of ASCII art. Although the palette is limited, the visual transformation is impressive and opens up new creative possibilities.