AI – rules after opening Pandora’s box? – Part II

The technological singularity – or simply the singularity – is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence, and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence. [1]

Is that happening right now? Maybe – there are good arguments for a trend of evolutionary jumps that could lead there within four years. [10] But what is surely happening now is an explosion of commercially driven and exploited specialised AI.

Photo licensed by iStock

Let’s be clear that specialised AI (machine learning trained for a specific task) and generalised AI (machine learning which exhibits general intelligence and common knowledge, and can be applied to understand and help with general tasks) need to be kept apart. We are afraid of the latter, but so far we are only dealing with the former, the specialised type. ChatGPT cannot drive a car, nor can it cook a meal or design a new drug – it can chat. Nevertheless, its application to written language, including code, may threaten the perception that some jobs are needed enough for clients to pay for them.

Former President of the United States Barack Obama spoke about singularity in his interview with Joi Ito and Scott Dadich, published in Wired in 2016: “One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity—they are worrying about ‘Well, is my job going to be replaced by a machine?’” [2]

That is the real question underneath the legality: which jobs will be replaced by the growth of commercial, specialised AI platforms, used at the push of a button, and why? Do we have a say in this, or did Pandora’s Box get opened (without asking anyone), unleashing the beast, so that now the creatives of the world (filmmakers, poets, writers, animators, illustrators, photographers, 3D designers etc.) have to live with the consequences (without resistance)?

Pandora’s Box got opened.

Ordinary people use this specialised generative AI to produce pictures, texts and videos, putting pressure on the professionals who get paid to do that for clients.

Now, shall we apply some rules in the aftermath, as in the music business, to curb the fallout? Or rely on the court cases that have already been won in defence of individual copyright?


An example of a copy/paste approach that backfired comes from Italian popular culture: the famous and infamous Sanremo Music Festival, which inspired the Eurovision Song Contest.

During the 66th edition of the Sanremo Festival (2016), a graphic work entitled ‘The scent of the night’, depicting a stylised digital flower, was used as a fixed part of the Sanremo stage scenography without the author’s authorisation having been requested. The author, Chiara Biancheri, a young Genoese architect who publishes digital creations under the name Lindelokse, sued RAI before the Court of Genoa, claiming to be the creator of the work and alleging infringement of her copyright.

Illustration by Chiara Biancheri aka Lindelokse: Queen of the Rain (Apophysis 7x, Photoshop for minor brightness adjustments)

RAI, on the other hand, argued that since it was a digital work – that is to say, obtained by means of “artificial intelligence software” – it could not benefit from the protection the law attributes to copyright, with the result that the graphic work would have to be classified as freely usable. To clarify: the programme Chiara Biancheri used, Apophysis, is a fractal generator driven by a mathematical algorithm, not a product of artificial intelligence; if it were to be considered AI, then every digital work of art would be “AI”. Meanwhile, the use of computer techniques or systems for the creation of works does not preclude recognition of the requirement of creativity under copyright law – but declaring a work made with the help of software a “creative” or “artistic” work which is copyrightable is an interesting legal finding, and in Italy it now stands as a precedent:

The lower courts first, and then the Italian Supreme Court (Court of Cassation, Civil Cassation judgment of 16 January 2023, no. 1107), ruled in favour of the protection of the work: it is to be considered a ‘creative work’ even if it is the result of digital graphic processing, i.e. obtained through the use of software. The broadcaster RAI was therefore ordered to pay €40,000 and to remove the images relating to the Festival from its website. [3]

Photo “UN FIORE CHE SBOCCIA” (“A blossoming flower”) – the scenography of the 2016 Sanremo Festival designed by Renato Bocchino (Credit: Ansafoto, with illustration by Chiara Biancheri)
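The distinction RAI blurred is worth making concrete. A fractal generator like Apophysis is deterministic mathematics: feed it the same parameters and it renders the same image, pixel for pixel. Here is a minimal sketch of an iterated function system (the same family of algorithms behind fractal tools; the three affine maps below are invented for illustration and are not an actual Apophysis preset):

```python
import random

# A tiny iterated function system (IFS): three affine maps, chosen at random
# each step, whose repeated application traces out a fractal attractor.
# These particular maps produce a Sierpinski-style triangle.
MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def render(seed, iterations=10_000):
    """Return the attractor's points; a fixed seed fixes every 'pixel'."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(iterations):
        x, y = rng.choice(MAPS)(x, y)
        points.append((round(x, 6), round(y, 6)))
    return points

# The same parameters always yield the exact same image:
assert render(seed=42) == render(seed=42)
```

There is no training data and no learned model anywhere in this process – only arithmetic – which is why calling such software “artificial intelligence”, as RAI did, misses the mark.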

In fact, RAI, among other things, complained – albeit belatedly – that the work in question could not be qualified as an original work because it was generated by software, leading the Supreme Court to affirm that “… it would have been necessary to ascertain whether and to what extent the use of the tool (the software) had absorbed the creative elaboration of the artist who had used it”.

The Supreme Court would therefore seem to affirm, albeit only incidentally, that an artistic creation can be protected even if created ‘jointly’ by a natural person and a piece of software. Some could argue that this would therefore also apply to creation by Artificial Intelligence software, disregarding the different nature of AI services and the fractal software used in the Biancheri vs. RAI case. This raises the interesting question of how much of the result of a prompt to an LLM (Large Language Model) could be attributed to the individual who used the AI – and therefore copyrighted.

The Supreme Court made a clarification during RAI’s appeal: the use of software per se does not exclude the human creativity necessary for copyright protection but, where this is challenged, a finding of fact will be needed to check whether and to what extent the use of the tool absorbed (“avesse assorbito”) the creativity of the artist who made use of the software (section 5.3). The facts were not considered in the case because the plea was only raised during the appeal and was inadmissible (section 5.2). [4]

Who judges whether the AI’s activity is preponderant – weighing far more in the creation of each pixel than the text typed into an AI service platform, which triggers an essentially arbitrary result? This evaluation will, and should, be a matter for the courts, national and international law, case law and copyright treaties, rather than the assumption and preference of a shareholder, a CEO or an ordinary user. Obviously, ascertaining whether a given creative work is predominantly attributable to the AI or to the user will entail a careful case-by-case investigation into the characteristics and completeness of the process of “dreaming up/calculating/statistically weighing” a result by an AI/LLM/machine-learning system and the instructions (prompts) provided by the user.


The dispute may boil down to assessing how much arbitrariness versus human command and control is involved in copyrightable material. In the US, DC District Court Judge Beryl A. Howell says human beings are an ‘essential part of a valid copyright claim.’ She was presiding over a lawsuit against the US Copyright Office after it refused Stephen Thaler a copyright for an AI-generated image made with the Creativity Machine algorithm he had created. In her decision, Judge Howell wrote that copyright has never been granted to work that was “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.” That has been borne out in past cases cited by the judge, like the one involving a monkey selfie. By contrast, Judge Howell noted a case in which a book a woman had compiled from notebooks filled with “words she believed were dictated to her” by a supernatural “voice” was held worthy of copyright. [5]

Illustration by pixabay

Judge Howell did, however, acknowledge that humanity is “approaching new frontiers in copyright,” where artists will use AI as a tool to create new work. She wrote that this would create “challenging questions regarding how much human input is necessary” to copyright AI-created art, noting that AI models are often trained on pre-existing work. [5]

It may be of interest that the results of a prompt are never the same: they change significantly every time it is run through an LLM. One could ask how much control – and therefore authorship – over the outcome this allows, if the results differ even when the exact same prompt is used again. Matt Hervey, Head of Artificial Intelligence (UK) at the law firm Gowling WLG, assesses the main question as follows: “Does copyright subsist in works created by the new breed of systems, such as DALL·E 2, Midjourney and Stable Diffusion? Here there is the extra complexity of whether a user’s text prompt is sufficient to meet the originality threshold for copyright in an artistic work.

In the UK, the closest cases are Cala Homes (South) Ltd v Alfred McAlpine Homes East Ltd (No.1) [1995] 7 WLUK 62, [1995] F.S.R. 818, [1996] C.L.Y. 3632, in which a person giving detailed verbal instructions was held to be a joint author of artistic works, and Kenrick & Co v Lawrence & Co, (1890) 25 Q.B.D. 99 (under the old UK law, the Copyright (Works of Art) Act, 1862).

These cases foreshadow the key question of fact for generative AI: whether a specific text prompt passes the threshold for originality. But both cases need to be approached with caution, since they pre-date the harmonised approach to originality in the EU (then including the UK) established in a number of EU Directives and CJEU judgments.” [6]
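The nondeterminism behind this debate can be sketched in a few lines. Generative models pick each next token by sampling from a probability distribution, so the same prompt can land on different outputs each run, while greedy (temperature 0) decoding is reproducible. The vocabulary and probabilities below are invented for illustration; real systems compute them from billions of parameters, but the sampling step works the same way:

```python
import random

# Toy next-token distribution for a hypothetical prompt. The probabilities
# are made up for this sketch, not taken from any real model.
NEXT_TOKEN_PROBS = {"Monet": 0.4, "Giger": 0.3, "rain": 0.2, "neon": 0.1}

def sample_token(probs, temperature=1.0, rng=random):
    """Sample one token; temperature 0 degenerates to deterministic argmax."""
    if temperature == 0:
        return max(probs, key=probs.get)
    # Higher temperature flattens the distribution, increasing randomness.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = rng.random() * sum(weights.values())
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return token
    return token

# Greedy decoding is reproducible: the same prompt always yields the same token.
assert all(sample_token(NEXT_TOKEN_PROBS, 0) == "Monet" for _ in range(5))

# Sampled decoding is not: repeated runs can and do differ.
print([sample_token(NEXT_TOKEN_PROBS, 1.0) for _ in range(20)])  # mix varies per run
```

If authorship hinges on control over the outcome, the fact that deployed systems sample rather than decode greedily is exactly what makes the “guiding human hand” question so hard to answer.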

In the Cala Homes (South) Ltd case the judge noted (though without any express endorsement) that counsel supporting the case for joint authorship “conceded that mere instructions such as ‘paint me a yellow flower’ would not make the person who gives those instructions a joint author of the resultant painting. In that case all the skill and labour in composition would come from the painter.”

Fractal flower, Illustration by pixabay

Choosing the subject matter alone seems unlikely to suffice, particularly in the case of broad subject matter (such as “a flower”), as this contribution may fall on the wrong side of the idea vs. expression distinction repeatedly seen in copyright treaties, laws and case law. This is consistent with Kenrick & Co v Lawrence & Co, in which the judge said: “I think that I am upon very safe ground in saying that the mere choice of subject can rarely, if ever, confer upon the author of the drawing an exclusive right to represent the subject”.

Hervey further remarks that “In practice, it seems successful generative AI results can involve significant creative choices by users, which should make authorship more likely.”

Whatever “significant” and “creative” mean when you cite “Impressionism”, “like Monet” or “like Giger” in a prompt because you recognise these styles but have no idea how they came into being or how to recreate them.

There are at least two disputes over authorship of AI-generated works to watch, both in the US. First, the DABUS team’s appeal of the US Copyright Office’s refusal to register A Recent Entrance to Paradise, an image created by the DABUS AI. Second, whether the US Copyright Office cancels its registration of Zarya of the Dawn, a graphic novel created/generated by Kristina Kashtanova using Midjourney. The USCO has given notice of potential cancellation and requested details of the human involvement in the creation of the work. [7] It currently appears the registration has been cancelled, but at this stage only because of a system error. [8]


One argument is that it spurs the ego of its users and leads them to think they are pros, even though they have no skills to back this up. It fills a psychological void of “creative envy”, where the hard work of becoming a painter/graphic designer/animator etc. can be easily emulated by “everyone” and is therefore diminished in value. The message is: you do not need skills, you only need to pay for our service to make up for it – and boom, you are a pro creator and awesome. See Microsoft’s latest Super Bowl ad, which homes in on exactly that.

They tell you you have a copilot, but that makes you the pilot, yeah? Wanna fly with a wannabe aviator who works by day in a pharmacy (supermarket/shoe shop/insurance company…), types some sentences into a terminal and promises to fly you safely across the continent – or do you want the real human pilot, with experience?

It can be called “the death of skills” – and of imagination, not the “moon moment” for it. Remember what happened after “man” was on the moon… practically nothing for ca. 50 years, until Musk sent his roadster into space (I am not a fan of his attitudes or “achievements” on the backs of others, taking credit, but he is and was an interesting investor and capitalist troll, becoming the bane of unions). My point here is that a technological event creating buzz does not necessarily trickle down and spawn a “new age”. Being in awe is not enough. It’s about cultural impact, not just application and who makes a buck from it (of course, the latter sounds exactly like the real driving factor).

The craze about AI is a slow burner coming out of the so-called AI winter, when machine learning was around and bringing results – just not the flashy ones you can use at home. The self-landing rockets of SpaceX are an example of successful machine learning and a milestone in reusability. “AI” is a marketing/branding craze to fuel commercial services on top of an extraction paradigm which plagues our planet – and I cannot find a way to call this a good thing. [9] That is an opinion, and therefore this article is filed under “Viewpoint”.


I can imagine three legal ways this might pan out – if it ever does. I call them the three olive branches of peace & war:

a) Works generated by specialised AI/LLMs/machine learning are deemed illegal, because they were made possible only by training on copyrighted material (the elephant in the room): Pandora’s Box gets closed again.

b) Partial joint authorship is conceded to the user, who elicits responses from specialised AI/LLMs/machine learning and receives a partial copyright for them. This is applied only in commercial cases; most user creations will be considered fair use. Pandora’s Box stays open, but the disease gets treated and the real problems addressed.

c) No copyright claim is conceded concerning the training material, even though there was no explicit consent to its use. The results from specialised AI/LLMs/machine learning are declared copyrightable, including in commercial products. Pandora’s Box stays open, and the beast stays unleashed and can do whatever it wants.

I guess b) will become a reasonable paradigm in some form, but I can definitely see powerful forces pushing, recklessly, for c). Too much depends on generative AI becoming AGI (Artificial General Intelligence), which is already being integrated into weapon systems and deployed in defence and research to carve out an edge against other state players. Whether or not we manage to give birth to a superintelligence, AI/LLM development will confer a decisive economic and military advantage. Neither China nor Russia is out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way? That may sound like a doomer’s slogan, but there are data and trend trajectories backing it up. [10]

It may be a late-capitalist battle for the spoils of the “next big thing”, with no care for who gets left behind. The legal battle may be over billions in possible profits and the billions in investment already being poured into the theme in hope of quick returns. Shall we teach art students about LLMs/machine learning? Of course, but not just how to use them. If they become proficient at getting results but no light is shed on what is happening inside the black box, that would turn out to be a grave mistake. Teaching artists how to use image generators still does not teach them how to build their own algorithms. Teach them to code, not just to prompt. Failing to reach a sound legal understanding of what influence or “joint authorship” a user might have in the process of creating material with AI is an even worse mistake. No matter how complicated it may be, much of our future in the creative industries depends on it – and maybe the future of the free world.

I repeat: whether AI rules, or we rein its use in with rules, is up to the regulating bodies, political will and the assessment of courts independent of vested corporate interests. Until it is decided, or remains undecided, we will live with the ambivalence and the sword hanging over our heads – and yes, it hangs also over start-ups, who would need a deregulated market to succeed. It is a legal question, one whose answer must take into account all the angles and consequences we all have to live with.

Interesting times ahead, indeed.

  1. Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era”, archived on 10 April 2018 at the Wayback Machine, in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.
  2. Dadich, Scott. “Barack Obama Talks AI, Robo Cars, and the Future of the World”, Wired, 12 October 2016. Archived from the original on 3 December 2017. Retrieved 27 March 2024.
  3. Di Dizzly, “RAI contro AI ma non è intelligenza artificiale e perde in Cassazione”, SiTNewsfeel, 2 June 2023. Retrieved 27 March 2024.
  4. Donna, Massimo, “Il Diritto d’Autore, l’Intelligenza Artificiale e il Festival di Sanremo”, Partner at Paradigma – Law & Strategy, LinkedIn, 6 February 2023. Retrieved 27 March 2024.
  5. Davis, Wes, “AI-generated art cannot be copyrighted, rules a US federal judge”, 20 August 2023. Retrieved 28 March 2024.
  6. Hervey, Matt, The Official Gowling WLG Blog, 6 February 2023. Retrieved 27 March 2024.
  7. Graves, Franklin, “U.S. Copyright Office Backtracks on Registration of Partially AI-Generated Work”, 1 November 2022. Retrieved 28 March 2024.
  8. Graves, Franklin, “Copyright Office Pilot Public Records System Mistakenly Reflects Cancellation of Registration for AI Graphic Novel”, 24 January 2023. Retrieved 27 March 2024.
  9. Casadoro-Kopp, Herwig Egon, “The cheap truth of AI/ML”, 21 February 2024. Retrieved 27 March 2024.
  10. Aschenbrenner, Leopold, Situational Awareness, June 2024. Retrieved 6 June 2024.


Peter Muttcoin