
Copyright, the learning issue and unethical corporate-generated art

"The End of Art"

So proclaimed the philosopher and art critic Arthur Danto, after contemplating Andy Warhol's Brillo Box, exhibited in 1964.

Arthur Danto argued that art has undergone a historical transformation from mimesis, or imitation, to self-consciousness. In the past, art was judged on its ability to accurately represent reality. However, with the rise of new artistic movements such as Cubism, Abstract Expressionism, and Pop Art, art began to focus more on subjective expression and the exploration of new forms. This shift in focus led to a new understanding of what art is and what it can do.

Danto argued that this process of self-consciousness is complete when art becomes aware of itself as art. This is what he calls the "end of art." He believes that the history of art is a history of the gradual realisation of the medium's own possibilities. When art becomes aware of itself, it can no longer progress in the same way. This does not mean that art will stop being made, but it does mean that it will no longer be driven by the same impulse to imitate reality.

The end of the end of art

Now things have changed once more, driven by people using non-artistic stochastic tools, tools created by the corporate impulse to imitate art, or at least to generate profit from it. That isn't to say that AI generation tools can't be used by artists, or that the results of such an artistic process aren't valid. It's just that, like much AI tooling, the tools aren't quite up to the job yet. We may soon have a new art movement; how important the tools will be to it remains to be seen. Photographic tools, for instance, were not considered sufficient to 'transform' a photographer into an artist.

Photography began in the 1820s, but many artists and art critics long saw it as a threat to traditional art forms. The Victoria & Albert Museum in London became the first museum to hold a photography exhibition, in 1858, but it took museums in the United States a while to come around. The Museum of Fine Arts, Boston, one of the first American institutions to collect photographs, didn't do so until 1924. By the early 1940s, photography had officially become an art form in the United States, and it soon received the same consideration in Europe and beyond. This is not to say that, because photography is now an accepted art form, all photographers are artists. That would be like saying anyone who pushes a doorbell is an artist. Which is where copyright comes in.

Although there is much to argue with in copyright and copyright law as applied, legal proceedings do, more often than not, come to sensible conclusions in distinguishing between an art commissioner, an artist and, say, an arts worker. These distinctions have much merit to them.

What follows is a quote from an excellent piece entitled 'The AI Art Apocalypse', by Tim Hickson of the YouTube channel 'Hello Future Me'. Hickson is a New Zealand writer who has worked as a copyright lawyer. As he tells us in the introduction to the video, he spent months researching and thinking about this subject, and that is ably demonstrated. I have wanted to cover art, AI and ethics ever since starting this blog; my bias is that I trained in art school for five years. I had avoided the topic until now, as it is a canyon-sized rabbit hole in its own right. Thankfully I waited, as this video poses more questions than I had arranged in my head, and does so well.

'It's easy to imagine this as the next great creative revolution. However, it could just as easily be another corporate-dominated field that saturates the market while cutting out people whose work it was built on, both financially and creatively.

If we aren't careful, this misses the fact that we have already allowed certain corporate practices and technologies to become commonplace in society, which we now resent. Microtransactions, widespread digital surveillance, corporations owning and selling our data, gambling for children, and loot boxes are just a few examples of things that have sprung from new technologies being so quickly and excitedly embraced without thinking about how corporations or the like will use them. Digital privacy continues to be a painful clawback all across the globe, which companies continue to strongly lobby against.

I think this time we might just be ahead of the curve. Technology uplifts society, of course it does. But all too often, corporations monopolize it. They try to get away with everything they can ethically before the law catches up with them, by which time they are well-established leaders in their field, like they're doing now.

The stupid thing is, all of this situation could have been avoided if these AI were just trained right on Creative Commons material, or stuff in the public domain, or artwork volunteered by people who wanted to help.'

The full video is below. I could have picked out whole sections from it to comment on. I'm largely in agreement with Hickson, who goes through much of the debate in an accessible way, free from the jargon of the art establishment or of academia, which is refreshing. The one thing about art schools is that they do love to preserve their mystique through their use of language. That's not the case in the following video, which is two hours well spent.


So I share Hickson's view that generative 'art', as it currently stands, should be ruled Creative Commons, or even free, unlicensed work. That would delay the corporate power grab, at least until these tools require artistic intention to be a significant part of the creation process; then it's a different matter. It's unfortunate, but we may have to rely on legal clarification for these tools to show their potential as part of an artistic process, and on, perhaps, the open-source community to train tools ethically and to redesign their interfaces to allow artistic processes to be input.

