Pengyuan Lyu, Xiang Bai, Cong Yao, Zhen Zhu, Tengteng Huang, Wenyu Liu
Abstract URL: http://arxiv.org/abs/1706.08789v1
In this paper, we investigate the Chinese calligraphy synthesis problem:
synthesizing Chinese calligraphy images with a specified style from standard
font (e.g., Hei font) images (Fig. 1(a)). Recent works mostly follow a stroke
extraction and assembly pipeline, which is procedurally complex and limited by
the quality of stroke extraction. We instead treat calligraphy synthesis as an
image-to-image translation problem and propose a deep neural network based
model that generates calligraphy images directly from standard font images.
In addition, we construct a large-scale benchmark covering various calligraphy
styles for Chinese calligraphy synthesis. We evaluate our method along with
several baseline methods on the proposed dataset, and the experimental results
demonstrate the effectiveness of our proposed model.
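The abstract frames calligraphy synthesis as image-to-image translation: a model takes a standard-font glyph image and emits a same-size image in the target style. The sketch below illustrates only that input/output contract with a toy encode-transform-decode pipeline in NumPy; it is not the paper's network, and `translate`, `encode`, `decode`, and `style_shift` are hypothetical names invented for this illustration.

```python
import numpy as np

def encode(img, factor=2):
    # Average-pool downsampling: a stand-in for a learned encoder.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(feat, factor=2):
    # Nearest-neighbour upsampling back to the input resolution.
    return feat.repeat(factor, axis=0).repeat(factor, axis=1)

def translate(img, style_shift=0.1):
    # Placeholder "style transformation"; a real model learns this mapping
    # from paired standard-font and calligraphy images.
    feat = encode(img)
    feat = np.clip(feat + style_shift, 0.0, 1.0)
    return decode(feat)

glyph = np.random.rand(64, 64)   # stands in for a standard-font glyph image
styled = translate(glyph)
print(styled.shape)              # output has the same spatial size as the input
```

The point of the sketch is that, unlike stroke extraction and assembly, nothing here operates on individual strokes: the whole image is mapped to a whole image in one pass.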