
Copyright, the learning issue, and unethical corporate-generated art

 


"The End of Art"

So proclaimed the philosopher and art critic Arthur Danto, reflecting on Andy Warhol's exhibition of his Brillo Boxes in 1964.

Arthur Danto argued that art has undergone a historical transformation from mimesis, or imitation, to self-consciousness. In the past, art was judged on its ability to accurately represent reality. However, with the rise of new artistic movements such as Cubism, Abstract Expressionism, and Pop Art, art began to focus more on subjective expression and the exploration of new forms. This shift in focus led to a new understanding of what art is and what it can do.

Danto argued that this process of self-consciousness is complete when art becomes aware of itself as art. This is what he calls the "end of art." He believes that the history of art is a history of the gradual realisation of the medium's own possibilities. When art becomes aware of itself, it can no longer progress in the same way. This does not mean that art will stop being made, but it does mean that it will no longer be driven by the same impulse to imitate reality.

The end of the end of art

Now things have changed once more, driven by people using non-artistic stochastic tools, created by corporate entities with the impulse to imitate art, or at least to generate profit. That isn't to say that AI generation tools can't be used by artists, or that the results of an artistic process aren't valid. It's just that, like many AI tools, they aren't quite up to the job yet. We may soon have a new art movement; how important the tools will be to it remains to be seen. Photographic tools, for instance, were not considered sufficient on their own to 'transform' a photographer into an artist.

Photography began in the 1820s, but many artists and art critics long saw it as a threat to traditional art forms. The Victoria & Albert Museum in London became the first museum to hold a photography exhibition in 1858, but it took museums in the United States a while to come around. The Museum of Fine Arts, Boston, one of the first American institutions to collect photographs, didn't do so until 1924. By the early 1940s, photography had officially become an art form in the United States, and it soon received the same consideration in Europe and beyond. This is not to say that because photography is now an accepted art form, all photographers are artists. That would be like saying anyone who pushes a doorbell is an artist. Which is where copyright comes in.

Although there is much to argue about in copyright and copyright law as applied, it does, through legal proceedings, more often than not reach sensible conclusions in distinguishing between, say, an art commissioner, an artist and an arts worker. These distinctions have much merit to them.

What follows is a quote from an excellent piece entitled 'The AI Art Apocalypse' by Tim Hickson of the YouTube channel 'Hello Future Me'. Hickson is a New Zealand writer who has worked as a copyright lawyer; he has spent months researching and thinking about this subject, as he tells us in the introduction to the video, and that is ably demonstrated. I have wanted to cover art, AI and ethics ever since starting this blog. My bias: I trained at art school for five years. I had avoided the subject until now, as it was such a canyon-sized rabbit hole in its own right. Thankfully I waited, as this video poses more questions than I had arranged in my head, and poses them well.

'It's easy to imagine this as the next great creative revolution. However, it could just as easily be another corporate-dominated field that saturates the market while cutting out people whose work it was built on, both financially and creatively.

If we aren't careful, this misses the fact that we have already allowed certain corporate practices and technologies to become commonplace in society, which we now resent. Microtransactions, widespread digital surveillance, corporations owning and selling our data, gambling for children, and loot boxes are just a few examples of things that have sprung from new technologies being so quickly and excitedly embraced without thinking about how corporations or the like will use them. Digital privacy continues to be a painful clawback all across the globe, which companies continue to strongly lobby against.

I think this time we might just be ahead of the curve. Technology uplifts society, of course it does. But all too often, corporations monopolize it. They try to get away with everything they can ethically before the law catches up with them, by which time they are well-established leaders in their field, like they're doing now.

The stupid thing is, all of this situation could have been avoided if these AI were just trained right on Creative Commons material, or stuff in the public domain, or artwork volunteered by people who wanted to help.'

The full video is below. I could have picked out whole sections of it to comment on. I'm largely in agreement with Hickson, who goes through much of the debate in an accessible way, free from the jargon of the art establishment or of academia. Which is refreshing. The one thing about art schools is, they do love to preserve their mystique through their use of language. That's not the case in the following video, which is two hours well spent.


So I share Hickson's idea that generative 'art', as it currently stands, should be ruled as Creative Commons, or even free, unlicensed art. That delays the corporate power grab, at least until these tools require artistic intention to be a significant part of the creation process; then that's a different matter. It's unfortunate, but we may have to rely upon legal clarification for these tools to show their potential as part of an artistic process, and, perhaps, upon the open-source community to ethically train tools and to redesign their interfaces to allow artistic processes to be input.


