Nowadays, AI is everywhere, driving productivity and performance in every industry where it is applied. The art world is no exception: transforming realistic images into artistic ones is now trendy, and many social platforms have made it mainstream. In this context, this article discusses how neural networks are used in the art industry and looks at a neural-network-based architecture called Paint Transformer, which produces human-like paintings from given natural images. Here are the main points discussed in this article:
- Neural networks in the art industry
- How does Paint Transformer work?
- Implementing Paint Transformer
First, we will discuss how neural networks are used in art.
Neural networks in the art industry
Painting has been a fantastic way for humans to preserve what they experience or even how they picture the world since ancient times. Painting has long been recognized as requiring professional knowledge/skills and is difficult for a layman. Computer-assisted art production fills much of this void, allowing many of us to produce our own artistic works.
Neural networks are used by artists to augment or improve their artistic endeavors, as well as to generate entirely new works of art. Adobe’s Wetbrush technology erodes the boundary between digital art and painting, despite the fact that digital art has long been recognized as a unique medium.
Creating an effect that looks like paint on digital media used to be difficult, but Wetbrush uses algorithms and a physics simulation engine to create digital brushstrokes that have the feel and color of oil paint. Wetbrush images can also be printed in three dimensions, simulating the effects of natural light in an oil painting.
Natural images can be transformed into artistic images via image models such as style transfer or image-to-image translation. Image creation is usually formulated as an optimization process in pixel space or pixel-by-pixel image mapping with neural networks in these previous methods.
Humans, on the other hand, create paintings by a stroke-by-stroke procedure, using brushes ranging from coarse to fine, unlike the pixel-by-pixel operations of neural networks. Getting machines to mimic such a stroke-by-stroke process has a lot of potential to produce more authentic and human paintings.
How does Paint Transformer work?
In order to simulate the natural stroke-by-stroke behavior of human painting, Songhua Liu et al. proposed a neural-network-based architecture that produces a painted image from a given realistic image. In this section we discuss the architecture, and in the next section we will see how to implement it.
The model receives an initial canvas and a target natural image. It predicts a set of strokes and then renders them onto the canvas so as to minimize the difference between the rendered image and the target image. The procedure is repeated at K coarse-to-fine scales, with the output of the previous scale serving as the initial canvas for the next; the task is thus formulated as a feed-forward stroke-set prediction problem.
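The coarse-to-fine loop can be sketched as follows. This is a minimal illustration, not the real model: `predict_strokes` and `render` are hypothetical stand-ins for the Stroke Predictor and Stroke Renderer, using mean-color rectangles on a NumPy canvas instead of learned strokes.

```python
import numpy as np

def predict_strokes(canvas, target, grid):
    """Hypothetical stand-in for the Stroke Predictor: for each cell of a
    grid x grid partition, propose one rectangular stroke whose colour is
    the mean of the target image in that cell."""
    h, w, _ = target.shape
    strokes = []
    for i in range(grid):
        for j in range(grid):
            y0, y1 = i * h // grid, (i + 1) * h // grid
            x0, x1 = j * w // grid, (j + 1) * w // grid
            colour = target[y0:y1, x0:x1].mean(axis=(0, 1))
            strokes.append((y0, y1, x0, x1, colour))
    return strokes

def render(canvas, strokes):
    """Hypothetical stand-in for the Stroke Renderer: paint each stroke
    onto a copy of the canvas."""
    out = canvas.copy()
    for y0, y1, x0, x1, colour in strokes:
        out[y0:y1, x0:x1] = colour
    return out

def paint(target, K=3):
    """Repeat prediction at K coarse-to-fine scales; the output of each
    scale becomes the initial canvas of the next scale."""
    canvas = np.zeros_like(target)
    for k in range(K):
        strokes = predict_strokes(canvas, target, grid=2 ** (k + 1))
        canvas = render(canvas, strokes)
    return canvas

# A smooth horizontal gradient as a toy "natural image"
row = np.linspace(0.0, 1.0, 32)
target = np.tile(row, (32, 1))[..., None].repeat(3, axis=-1)
painting = paint(target, K=3)
print(float(np.abs(painting - target).mean()))
```

With more scales the strokes get finer, so the reconstruction error between the painted canvas and the target shrinks, which mirrors the coarse-to-fine refinement described above.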
Because no dataset of stroke annotations is available, the authors train the model with a new self-training pipeline that uses only synthesized stroke images. To be more specific, it first generates a backdrop image with randomly sampled strokes; then it generates a target image by randomly sampling a set of foreground strokes and rendering them onto the canvas image.
Thus, the learning goal of the Stroke Predictor is to predict the foreground stroke set and minimize the difference between the synthesized canvas image and the target image, with optimization carried out at both the stroke level and the pixel level.
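The data-synthesis side of this self-training pipeline can be sketched in a few lines. Again this is a toy illustration under simplifying assumptions: strokes are plain axis-aligned rectangles, and `random_strokes` and `render` are hypothetical helpers, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_strokes(n, size):
    """Sample n random rectangular strokes: position, extent and colour."""
    strokes = []
    for _ in range(n):
        y0, x0 = rng.integers(0, size - 4, 2)
        h, w = rng.integers(2, 5, 2)
        strokes.append((y0, y0 + h, x0, x0 + w, rng.random(3)))
    return strokes

def render(canvas, strokes):
    """Paint each stroke onto a copy of the canvas."""
    out = canvas.copy()
    for y0, y1, x0, x1, c in strokes:
        out[y0:y1, x0:x1] = c
    return out

size = 32
# 1) backdrop image from randomly sampled strokes
canvas = render(np.zeros((size, size, 3)), random_strokes(20, size))
# 2) target image: render random foreground strokes onto the canvas
foreground = random_strokes(10, size)
target = render(canvas, foreground)
# 3) pixel-level training signal: difference between a predicted rendering
#    and the target (here the ground-truth strokes, so the loss is zero)
predicted = render(canvas, foreground)
pixel_loss = float(np.abs(predicted - target).mean())
print(pixel_loss)
```

Because the foreground strokes are sampled, their parameters are known exactly, which is what makes stroke-level supervision possible without any human-annotated data.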
Once trained, the self-trained Paint Transformer generalizes well and can work with arbitrary natural images. The sample output shown below demonstrates how this feed-forward method can produce higher-quality paintings at a lower cost than existing methods.
Implementing Paint Transformer
In this section we take a closer look at the implementation; the research team has made it possible to reproduce images like the ones above in just two or three steps. Before the implementation, let's briefly summarize the procedure again.
Painting proceeds step by step: at each step the model predicts multiple strokes in parallel so as to minimize the difference between the current canvas and the original target image.
Paint Transformer is made up of two modules to accomplish this: a Stroke Predictor and a Stroke Renderer. Given a target image and an intermediate canvas image, the Stroke Predictor generates a set of parameters that determine the current stroke set. The Stroke Renderer then generates an image for each stroke and draws it onto the canvas, producing the output painting.
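To make the Stroke Renderer's job concrete: each stroke is described by a small parameter vector (roughly its centre, size, rotation and colour). Below is a minimal sketch of rendering one such parameterized stroke with NumPy; the rotated-rectangle mask is a simplification of the real renderer, which uses a brush texture.

```python
import numpy as np

def render_stroke(canvas, x, y, h, w, theta, colour):
    """Draw one rotated rectangular stroke onto a copy of the canvas.

    (x, y)  - stroke centre in pixels
    (h, w)  - stroke height and width in pixels
    theta   - rotation angle in radians
    colour  - RGB triple in [0, 1]
    """
    H, W, _ = canvas.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    # rotate pixel coordinates into the stroke's own frame
    dx, dy = xs - x, ys - y
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    mask = (np.abs(u) <= w / 2) & (np.abs(v) <= h / 2)
    out = canvas.copy()
    out[mask] = colour
    return out

canvas = np.zeros((64, 64, 3))
canvas = render_stroke(canvas, x=32, y=32, h=10, w=30,
                       theta=np.pi / 4, colour=np.array([0.8, 0.2, 0.1]))
print(int(canvas.any(axis=-1).sum()))  # count of painted pixels
```

The Stroke Predictor's output is just a batch of such parameter vectors, which is why prediction and rendering can be cleanly separated into two modules.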
Now let’s get to the implementation. First, we will clone the Paint Transformers repository and set the working directory accordingly.
```
!git clone https://github.com/lucabeetz/PaintTransformer.git
%cd PaintTransformer/inference
```
That’s it. We can then paint any natural image using the run_inference function from the inference module.
```python
from inference import run_inference

input_img = '/content/jonathan-riley-VW8MUbHyxCU-unsplash.jpg'
run_inference(input_path=input_img,
              model_path="model.pth",
              output_dir="/content",
              need_animation=True,
              resize_h=None,
              resize_w=None,
              serial=True)
```
To run_inference we first pass the target image path and then a directory for the output results; need_animation and serial are set to True so that the intermediate frames needed for an animated result are saved. The rest of the arguments are self-explanatory. Below is our target image.
After inference, we can turn these stroke frames into an animation that shows how the painting forms. It can be done like this:
```python
import glob
from PIL import Image

# Set to dir with output images
in_dir = "/content/marine drive/*.jpg"
out_path = "/content/marine drive.gif"

img, *imgs = [Image.open(f) for f in sorted(glob.glob(in_dir))]
img.save(fp=out_path, format="GIF", append_images=imgs,
         save_all=True, duration=100, loop=0)
```
And below are the results,
The animated result,
Through this article, we have discussed how the art industry today is influenced by advanced technologies like AI. We briefly discussed neural painting and the different architectures that can be used in art. We mainly focused on Paint Transformer, a neural painting architecture, and saw its results in practice.