Converting Images into ASCII Art with a CNN, Plus a New Coloring Tool

A few days ago, the NIPS 2017 conference came to an end, but its influence is still growing. Recently, a fun project that was shortlisted for the Machine Learning for Creativity and Design workshop at NIPS 2017 gained attention on Twitter. The project, developed by Osamu Akiyama, uses a convolutional neural network (CNN) to convert images into ASCII art, and a recently added coloring tool has sparked further interest among machine learning enthusiasts.

The process involves two main steps: first, a CNN generates the ASCII character art; second, color is applied to it. Akiyama shared the dataset and a detailed description of the neural network in a paper, and also provides a coloring tool for users who already have ASCII art. The tool works best on clean ASCII art rendered as images with a white background; it does not accept plain-text documents.

In the paper, Akiyama describes collecting around 500 ASCII artworks from Japanese BBS platforms such as 5channel and Shitaraba. He faced a challenge, however: many users had uploaded hand-drawn ASCII art without the original image it was based on, leaving no paired examples from which an algorithm could learn to convert lines into characters. To solve this, he used a neural-network-based line-drawing cleanup model to reverse-engineer the ASCII art back into line drawings, improving the quality and continuity of the training images.

His CNN architecture consists of seven convolutional layers, three max-pooling layers, two fully connected layers, and an output layer, and is inspired by the VGG network. After training, the model produced high-quality ASCII art that outperformed existing tools in detail and contour accuracy. To further enhance the visual appeal, Akiyama introduced a new coloring tool, accessible via a website. While its exact design is not disclosed, the tool lets users upload their ASCII art and apply color, producing more vivid and visually appealing output.
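To see what makes the CNN approach interesting, it helps to contrast it with the classic non-learned baseline: tiling the image and mapping each tile's brightness to a character of matching visual density. The sketch below is that baseline, not Akiyama's method (his network instead classifies each local window into the best-matching character, which is why it preserves lines and contours much better); all names here are illustrative.

```python
# Classic brightness-ramp ASCII conversion: a simple baseline for the
# image-to-ASCII task, NOT the CNN approach described in the paper.
DENSITY_RAMP = " .:-=+*#%@"  # brightest tile -> space, darkest -> '@'

def tiles_to_ascii(gray, tile_h=2, tile_w=1):
    """gray: 2-D list of brightness values in [0, 255] (row-major)."""
    rows = []
    for y in range(0, len(gray) - tile_h + 1, tile_h):
        row = []
        for x in range(0, len(gray[0]) - tile_w + 1, tile_w):
            # Average brightness over the tile.
            total = sum(gray[y + dy][x + dx]
                        for dy in range(tile_h) for dx in range(tile_w))
            mean = total / (tile_h * tile_w)
            # Dark tiles get visually dense characters.
            idx = int((255 - mean) / 256 * len(DENSITY_RAMP))
            row.append(DENSITY_RAMP[idx])
        rows.append("".join(row))
    return "\n".join(rows)

# A 4x2 image: black on top, white on bottom.
art = tiles_to_ascii([[0, 0], [0, 0], [255, 255], [255, 255]])
```

Because this baseline looks only at average brightness, it loses exactly the line structure that matters for the hand-drawn style of BBS ASCII art; character-level classification, as the CNN does, is what recovers it.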
All the materials, including the dataset and trained models, are available on GitHub at [https://github.com/OsciiArt/DeepAA](https://github.com/OsciiArt/DeepAA). The project requires TensorFlow 1.3.0, Keras 2.0.8, NumPy 1.13.3, and a few other dependencies. The model and training data can be downloaded from Google Drive links provided in the repository. To run the code, update the `output.py` file with the path to your image; for a lightweight version, switch to the lighter model files. Once executed, the generated ASCII art is saved in the `output/` directory.

The coloring tool, accessible at [paintschainer.preferred.tech/index_en.html](http://paintschainer.preferred.tech/index_en.html), lets users add color to their ASCII creations. Although the color options may seem limited, the results demonstrate the potential of combining machine learning with traditional ASCII art techniques. Overall, this project highlights the creative possibilities of AI and shows how deep learning can transform digital art in unexpected ways. Whether you're an artist or a developer, exploring these tools offers a unique blend of technology and creativity.
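The setup steps above can be sketched as the following shell session. This is a hedged outline, not a verified recipe: only `output.py`, the `output/` directory, and the three pinned dependency versions are named in the article; everything else (the clone location, the exact edit inside `output.py`, where the downloaded weights go) follows the repository's README conventions and should be checked there.

```shell
# Clone the repository and install the pinned dependencies from the article.
git clone https://github.com/OsciiArt/DeepAA.git
cd DeepAA
pip install tensorflow==1.3.0 keras==2.0.8 numpy==1.13.3

# Download the model weights and training data from the Google Drive links
# in the repository README, and place them where the README indicates.

# Edit the image path inside output.py to point at your input image
# (switch to the lighter model files there for the lightweight version), then:
python output.py

# The generated ASCII art is written to the output/ directory.
ls output/
```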
