
Posts

Showing posts from April 23, 2023

Creative Industries, the Initial disruption

Over a decade ago I worked in an economic development unit for a consortium of local authorities with responsibility for the creative industries. Just prior to that I was a Cultural Policy Officer. So it was interesting to read the NESTA-led report 'The State of Creativity: policy, research, industry' (2023), to see how a lead body in the UK is responding to the threats and opportunities of AI in the sector.

Report summary: The State of Creativity reflects on creative industry policy over the last 10 years and asks where next for the creative sector. It was published by the Creative Industries Policy and Evidence Centre (PEC) and includes contributions from 24 creative industry thinkers from seven UK universities and across the creative sector. The report covers four main themes: innovation, skills, diversity and place. It explores the challenges and opportunities that the creative industries face

The EU AI Act has finally been passed: towards a legislative framework for AI

The EU have finally passed the EU AI Act. The following represent the most complete attempts at a legislative approach to AI regulation I've so far come across:

OECD Artificial Intelligence Principles
UNESCO Ethics of Artificial Intelligence
EU AI Act
OSTP Blueprint for an AI Bill of Rights

The EU AI Act will probably have the greatest impact, for now. But for today I want to concentrate upon the UNESCO ethics set against the provisions of the EU AI Act. Both the UNESCO Recommendation on the Ethics of Artificial Intelligence and the EU AI Act aim to guide the development of ethical AI. The UNESCO recommendation outlines 10 principles:

1. Proportionality and Do No Harm
2. Safety and security
3. Fairness and non-discrimination
4. Sustainability
5. Right to privacy and data protection
6. Human oversight and determination
7. Transparency and explainability
8. Responsibility and accountability
9. Awareness and literacy
10. Multi-stakeholder and adaptive governance and collaboration

Why Machines Will Never Rule the World

"Jobst Landgrebe is a scientist and entrepreneur with a background in philosophy, mathematics, neuroscience, and bioinformatics. Landgrebe is also the founder of Cognotekt, a German AI company which has since 2013 provided working systems used by companies in areas such as insurance claims management, real estate management, and medical billing. After more than 10 years in the AI industry, he has developed an exceptional understanding of the limits and potential of AI in the future. Barry Smith is one of the most widely cited contemporary philosophers. He has made influential contributions to the foundations of ontology and data science, especially in the biomedical domain. Most recently, his work has led to the creation of an international standard in the ontology field (ISO/IEC 21838), which is the first example of a piece of philosophy that has been subjected to the ISO standardization process." In their book 'Why Machines Will Never Rule the World: Artificial Intelligence

Deceptions: how the language used by tech deceives

In response to the quick article on this morning's Radio 4:

Misnomers: the terms used to market the field of AI tend to be misnomers, in commonly understood terminology. Let's start with Artificial Intelligence. The definition of 'intelligence' is "the ability to learn, understand and think in a logical way about things; the ability to do this well." AI neither understands nor thinks; instead the field redefines AI in its own terms as: the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. In these redefined terms 'AI' works, but what it certainly isn't is intelligent in the true sense. Hence AI is a contested term.

Neural Networks: this has a few competing definitions, along the lines of 'a computer system which is designed to work in a similar way to the human brain and nervous system'. What we are really defining is an artificial neural network
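The point about the 'brain' misnomer lands more clearly when you see how little brain-like machinery an artificial neuron actually contains. A minimal sketch (my own illustration, not from the post): each unit in an artificial neural network is just a weighted sum passed through a threshold, plain arithmetic rather than understanding or thought.

```python
# A minimal artificial "neuron": a weighted sum plus a threshold.
# Illustration only -- names and values here are my own, not from the post.

def step(x: float) -> int:
    """Threshold activation: output 1 if the input is non-negative, else 0."""
    return 1 if x >= 0 else 0

def neuron(inputs: list[float], weights: list[float], bias: float) -> int:
    """Compute the neuron's output: step(w . x + b)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(total)

# With these hand-picked weights the neuron behaves like logical AND:
# it "fires" only when both inputs are 1.
print(neuron([1, 1], [1.0, 1.0], -1.5))  # 1
print(neuron([1, 0], [1.0, 1.0], -1.5))  # 0
```

Whatever loose analogy to neurons inspired the name, the mechanism itself is a sum and a comparison, which is rather the post's point.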

The Alignment problem: should we treat AI like domestic dogs?

Deep agential diversity is a term used by Luise Muller in her paper "Domesticating Artificial Intelligence" to describe the property of social systems that contain human as well as nonhuman agents. In such systems, agents cooperate and work together in a number of different constellations and differ categorically in their agential capabilities, vulnerabilities, and moral standing. Muller argues that this diversity is "deep" because the differences between humans and AI agents are not just a matter of degree, but of kind. She suggests that we need to develop normative theories that are adequate for social systems populated by different kinds of agents exhibiting heterogeneity in abilities, autonomy, moral capability, moral status and vulnerability.

"And because of that, we lack the methodological tools to understand social systems that are characterized by what I want to call deep agential diversity. The term denotes the property of social systems that contain human as well as nonhuman agents

Do you gamble? In the quest to become 'Gods' this gamble is critical

"Paul Christiano runs the Alignment Research Center, a non-profit research organization whose mission is to align future machine learning systems with human interests. Paul previously ran the language model alignment team at OpenAI, the creators of ChatGPT."

The start of this video on the Bankless channel is shocking, especially given the position that Christiano previously held: 'overall maybe you are getting like 50/50 chance of doom shortly after you have AI systems that are at human level.'

Christiano doesn't get much more optimistic, as you can imagine from the above prediction: 'My default picture is like we have time to react in terms of the nature of AI systems changing, their capabilities changing. With luck we have some various kinds of smaller catastrophes occurring in advance, but I think that one of the bad things about the actual catastrophe we are worried about, does have these dynamics similar to like a human coup or revolution, where w

Don't Believe The (AI) Hype?

Upper Echelon LLC, the makers of this video, are a gaming company specializing in community development and event management. This video is refreshing and in the minority on YouTube currently, as it's programmers critiquing the AI industry. Zakrzeswski is a well-known critic of games with his YouTube channel. He has said: "I've thought about taking that edge off or thought about reducing the amount of flammable rhetoric or incendiary things that I say, but I don't see myself ever doing it."

On AutoGPT: "It simply can not operate at or near the average intelligence of a human being when it comes to creative thinking. But it can do quite a few simple things. For example AutoGPT agents, which are being spun up all over the world by basically anyone with a laptop and time on their hands, can now scrape the internet, cross-reference information and write their own version of it in seconds."

"AutoGPT, which is simply an assistant that spins

Perplexity - a web enabled GPT

Perplexity is a relatively new GPT that is web enabled and, importantly, cites the sources from which its information was gleaned. What is particularly interesting, beyond the results, is that it can plug into an OpenAI account should you want, and, more importantly, the company behind Perplexity publishes the names and sources of the project's primary financial backers. It's already available for iPhones and soon, it seems, for Android.

The search screen prompt is well tailored to quick searches too, which adds to its usability. Below is a sample of its findings from an initial prompt, followed by one of its suggested follow-on prompts. It really is easy to use, with one of the best UXs available for its type.