How Deep is Your Art: An Experimental Study on the Limits of Artistic Understanding in a Single-Task, Single-Modality Neural Network

  Mahan Agha Zahedi, Niloofar Gholamrezaei, Alex Doboli
Mathematical modeling and aesthetic rule extraction of works of art are complex activities, because art is a multidimensional, subjective discipline. Perception and interpretation of art are, to a large extent, relative and open-ended rather than measurable. Following the explainable Artificial Intelligence paradigm, this paper investigated, in a human-understandable fashion, the limits to which a single-task, single-modality benchmark computer vision model performs in classifying… 

Inching Towards Automated Understanding of the Meaning of Art: An Application to Computational Analysis of Mondrian's Artwork

  • Alex Doboli, Mahan Agha Zahedi, Niloofar Gholamrezaei
  • Art
  • 2022
Deep Neural Networks (DNNs) have been successfully used in classifying digital images but have been less successful in classifying images with meanings that are not linear combinations of their

A deep-learning framework for human perception of abstract art composition

A deep-learning algorithm that attempts to capture the perceptual mechanism underlying composition in humans, relying on a robust behavioral marker with known relevance to higher-level vision (orientation judgements), is shown to capture relevant characteristics of human orientation perception across styles and granularities.

Recognizing Art Style Automatically in Painting with Deep Learning

The use of deep residual neural networks is investigated to detect the artistic style of a painting; the approach outperforms existing methods, reaching an accuracy of 62% on the Wikipaintings dataset (across 25 different styles).

Toward Discovery of the Artist's Style: Learning to recognize artists by their artworks

PigeoNET is shown to be capable of attributing previously unseen artworks to the actual artists with an accuracy of more than 70% and represents a fruitful approach for the future of computer-supported examination of artworks.

DeepArt: Learning Joint Representations of Visual Arts

This paper presents a unified framework, called DeepArt, to learn joint representations that can simultaneously capture contents and style of visual arts, and introduces Art500k, a large-scale visual arts dataset containing over 500,000 artworks.

Toward the Automatic Retrieval and Annotation of Outsider Art images: A Preliminary Statement

The preliminary experiments suggest that, as with traditional styles, Outsider Art can be modelled computationally by objective means using training datasets and CNN models.

The Shape of Art History in the Eyes of the Machine

Surprisingly, the networks could place the works of art in a smooth temporal arrangement mainly based on learning style labels, without any a priori knowledge of time of creation, the historical time and context of styles, or relations between styles.

Classification of Style in Fine-Art Paintings Using Transfer Learning and Weighted Image Patches

A new efficient method, based on transfer learning and classification of sub-regions or patches of the painting, is described that improves classification accuracy on fine-art paintings compared to existing baseline methods.
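The patch-based scheme summarized above can be illustrated with a minimal sketch (not the paper's actual pipeline): split an image into fixed-size patches, score each patch with a stand-in classifier, and combine the per-patch class probabilities with weights so that some patches count more toward the final style label. All function names and the toy probabilities are hypothetical.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split a (H, W) array into non-overlapping patch_size x patch_size tiles."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return patches

def weighted_vote(patch_probs, weights):
    """Combine per-patch class probabilities with per-patch weights."""
    patch_probs = np.asarray(patch_probs)        # (n_patches, n_classes)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize to sum to 1
    combined = weights @ patch_probs             # weighted average, (n_classes,)
    return int(np.argmax(combined))

# Toy 8x8 "image" split into four 4x4 patches.
image = np.arange(64, dtype=float).reshape(8, 8)
patches = extract_patches(image, 4)
# Stand-in per-patch softmax outputs for 3 hypothetical style classes.
probs = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.2, 0.7, 0.1], [0.5, 0.4, 0.1]]
label = weighted_vote(probs, weights=[1, 1, 1, 1])
```

In a real system the stand-in probabilities would come from a CNN applied to each patch, and the weights could reflect patch saliency rather than being uniform.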

Toward automated discovery of artistic influence

A comparative study of different classification methodologies for the task of fine-art style classification, together with a visualization of artists based on the similarity between their works; the question "Who influenced this artist?" is investigated by comparing an artist's masterpieces to those of other artists.

How transferable are features in deep neural networks?

This paper quantifies the generality versus specificity of neurons in each layer of a deep convolutional neural network and reports a few surprising results, including that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
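The transfer-learning idea described above can be sketched in a toy form (a conceptual illustration under assumed simplifications, not the paper's experimental setup): weights of an early layer are "transferred" from a source task and frozen, and only a new linear head is trained on the target task. The random-projection feature extractor and the toy regression target are both hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "transferred" feature extractor: a fixed projection followed by ReLU,
# standing in for early convolutional layers learned on a source task.
W_frozen = 0.3 * rng.normal(size=(5, 8))

def features(x):
    return np.maximum(0.0, x @ W_frozen.T)       # (n, 5) hidden activations

# Toy target task: predict a linear function of the raw inputs.
X = rng.normal(size=(64, 8))
y = X @ rng.normal(size=8)

H = features(X)                                  # frozen features, not updated
w = np.zeros(5)                                  # trainable head

def mse(w):
    return float(np.mean((H @ w - y) ** 2))

loss_before = mse(w)
for _ in range(300):                             # gradient descent on the head only
    grad = 2.0 * H.T @ (H @ w - y) / len(y)
    w -= 0.01 * grad
loss_after = mse(w)
```

Only the head `w` is updated; `W_frozen` never changes, mirroring the common practice of freezing transferred layers before optionally fine-tuning them.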

Compare the performance of the models in art classification

This study tested 7 different models on 3 different datasets under the same experimental setup to compare their art classification performance with and without transfer learning, achieving state-of-the-art performance on all classification tasks across the three datasets.