
Copyright, the learning issue and unethical corporate-generated art

"The End of Art"

Proclaimed the philosopher and art critic Arthur Danto, after contemplating Andy Warhol's Brillo Box, exhibited in 1964.

Arthur Danto argued that art has undergone a historical transformation from mimesis, or imitation, to self-consciousness. In the past, art was judged on its ability to accurately represent reality. However, with the rise of new artistic movements such as Cubism, Abstract Expressionism, and Pop Art, art began to focus more on subjective expression and the exploration of new forms. This shift in focus led to a new understanding of what art is and what it can do.

Danto argued that this process of self-consciousness is complete when art becomes aware of itself as art. This is what he called the "end of art." He believed that the history of art is a history of the gradual realisation of the medium's own possibilities. When art becomes aware of itself, it can no longer progress in the same way. This does not mean that art will stop being made, but it does mean that it will no longer be driven by the same impulse to imitate reality.

The end of the end of art

Now things have changed once more, through the impulse of people using non-artistic stochastic tools, tools created by the drive of corporate entities to imitate art, or at least to generate profit. That isn't to say that AI generation tools can't be used by artists, or that the results of an artistic process aren't valid. It's just that, like many AI tools, they aren't quite up to the job yet. We may soon have a new art movement; how important the tools will be to it remains to be seen. Photographic tools, for instance, were not considered sufficient to 'transform' a photographer into an artist.

Photography began in the 1820s, but for decades many artists and art critics saw it as a threat to traditional art forms. The Victoria & Albert Museum in London became the first museum to hold a photography exhibition, in 1858, but it took museums in the United States a while to come around. The Museum of Fine Arts, Boston, one of the first American institutions to collect photographs, didn't do so until 1924. By the early 1940s, photography had officially become an art form in the United States, and it soon received the same consideration in Europe and beyond. This is not to say that, because photography is now an accepted art form, all photographers are artists; that would be like saying anyone who pushes a doorbell is an artist. Which is where copyright comes in.

Although there is much to argue over in copyright and copyright law as applied, it does, through legal proceedings, more often than not come to sensible conclusions when distinguishing between, say, an art commissioner, an artist and an arts worker. These distinctions have much merit to them.

What follows is a quote from an excellent piece entitled 'The AI Art Apocalypse' by Tim Hickson of the YouTube channel 'Hello Future Me'. Hickson is a New Zealand writer who has worked as a copyright lawyer, and, as he tells us in the introduction to the video, he has spent months researching and thinking about this subject, which is ably demonstrated. I have been wanting to cover art, AI and ethics ever since starting this blog. My bias: I trained at art school for five years. I had avoided the subject until now, as it was such a canyon-sized rabbit hole in its own right. Thankfully I waited, as this video poses more questions than I had arranged in my head, and does so well.

'It's easy to imagine this as the next great creative revolution. However, it could just as easily be another corporate-dominated field that saturates the market while cutting out people whose work it was built on, both financially and creatively.

If we aren't careful, this misses the fact that we have already allowed certain corporate practices and technologies to become commonplace in society, which we now resent. Microtransactions, widespread digital surveillance, corporations owning and selling our data, gambling for children, and loot boxes are just a few examples of things that have sprung from new technologies being so quickly and excitedly embraced without thinking about how corporations or the like will use them. Digital privacy continues to be a painful clawback all across the globe, which companies continue to strongly lobby against.

I think this time we might just be ahead of the curve. Technology uplifts society, of course it does. But all too often, corporations monopolize it. They try to get away with everything they can ethically before the law catches up with them, by which time they are well-established leaders in their field, like they're doing now.

The stupid thing is, all of this situation could have been avoided if these AI were just trained right on Creative Commons material, or stuff in the public domain, or artwork volunteered by people who wanted to help.'

The full video is below. I could have picked out whole sections of it to comment on. I'm largely in agreement with Hickson, who goes through much of the debate in an accessible way, free from the jargon of the art establishment or of academia, which is refreshing. The one thing about art schools is that they do love to preserve their mystique through their use of language. That's not the case in the following video, which is two hours well spent.


So I share Hickson's idea that generative 'art', as it currently stands, should be ruled Creative Commons, or even free, unlicensed art. That would delay the corporate power grab, at least until these tools require artistic intention to be a significant part of the creation process; then that's a different matter. It's unfortunate, but we may have to rely upon legal clarifications for these tools to show their potential as part of an artistic process, and, perhaps, upon the open source community to ethically train such tools and to redesign their interfaces to allow artistic processes to be input.

