
(Un)read in the ledger: Monday 21–27 October 2024



My weekly reading list

So much AI again this week: privacy guidance for AI developers, an in-depth exploration of being an artist in an age of AI, and Musk sued over infringing AI images.


Read

I am in Naarm/Melbourne for the Australian Internet Governance Forum tomorrow so I am publishing this week’s reading list from a hotel room in Chinatown. What I’ve been reading this week:

Guidance on privacy and developing and training generative AI models

OAIC published comprehensive privacy guidance for AI developers.

The Office of the Australian Information Commissioner (OAIC) has released guidance on privacy in relation to developing and training generative AI models. It is a big document and there is a lot in it. It looks at how and why AI developers should take a ‘privacy by design’ approach, their obligations to ensure the accuracy of personal information, and why they should strive to clearly and transparently inform people about how they collect personal information and how they will use it, including where the purpose for which it was collected was not primarily training or fine-tuning an AI model.

Because it is such a comprehensive document I have created a separate summary as a resource.

Office of the Australian Information Commissioner, Australian Government

Also worth reading on this topic:

New AI guidance makes privacy compliance easier for business

OAIC

Can personal information be used to develop or train GenAI?

Blog, OAIC

AI art: The end of creativity or the start of a new movement?

Amidst all the hubbub about AI and art, this wide-ranging article looks at how AI is challenging and inspiring creativity, authorship and human expression.

There are many conversations about AI going on. So many conversations. There are plenty of conversations exploring copyright and AI, privacy and AI, hallucinations, the potential for harm, AI being used for nefarious purposes and so many other topics. One area that is also in discussion, albeit a little lost in the morass, is whether and how AI-generated content challenges the idea of human creativity and the definition of art.

This well-developed piece by Claudia Baxter looks at the many dimensions of these questions. It draws some interesting parallels to other challenging moments in the history of art, such as Marcel Duchamp exhibiting a porcelain urinal as art. Is it true that by selecting an object and labelling it art, it becomes art?

“… AI-created artworks are disrupting the accepted norms of the art world. As philosopher Alice Helliwell from Northeastern University London argues, if we can consider radical and divergent pieces like Duchamp’s urinal and Tracey Emin’s bed as art proper, how can something created by a generative algorithm be dismissed? After all, both were controversial at the time and contain objects that haven’t technically been created by an “artist’s” hand.”

“Historically, the way we understand the definition of art has shifted,” says Helliwell. “It is hard to see why a urinal can be art, but art made by a generative algorithm could not be.”

The article describes the technology, processes and body of work of humanoid robot artist Ai-Da and how it ‘makes’ art. Later, the article goes on to say, “Human [artists] are just as prone to behaving like machines, repeating old behaviours and getting bogged down with rules, like a painter or musician locked into a particular style.” It’s an interesting analogy. At various points the article repeats the claim that AI can reinvigorate human creativity; as a creative medium, a creative outlet and a creative collaborator.

Baxter reminds us that “There is historical precedence for new technology liberating us from our creative shackles”, using photography’s invention in the 1800s to illustrate the point.

“Some artists saw the camera as the antithesis of an artist, and photographs as the mortal enemy of the art establishment.

But instead of replacing painting, photography became a catalyst in the development of the experimental modern art movement of the 20th Century, as artists moved away from realism towards abstraction, a shift that paved the way for the contemporary art of today.”

Back to Ai-Da, Baxter asks questions about whether a robot making art is an artist in its own right, linking this line of enquiry to the concept of authorship. Of course, whether by design or unintentionally, by adopting the language of copyright law here Baxter is straying into important conversations going on at the moment in the copyright space around human authorship in a world of AI. ⟨ This is a conversation I and many others have been engaged in. Determining exactly how, when and if users of an AI system should be granted copyright protection over the AI outputs they generate is a complex and contested issue. Certainly, there needs to be clarification of existing copyright principles – particularly originality and human authorship – and how they play out in relation to the use of AI.

What, if any, threshold of originality applies to AI outputs? Is that different to creating without AI? When can AI outputs be attributed to the creator’s skill, labour or judgement? What factors may be relevant to these questions? The level of specificity and expressiveness of a prompt and its ability to achieve a desired type of outcome? The number of prompts or iterations that were used? Whether the final material is a composite of multiple AI outputs?

And what about authorship? Is the mere presence of a human in the process enough? Is a simple initial prompt enough? Or is a higher level of human interaction required? What may evidence the requisite level? Demonstrating conceptualisation of the output before using AI? Iterating an output through further prompts? Direct addition to or manipulation of the output by a user? Perhaps in time these and other questions will start to establish the contours of copyright protection for AI outputs.

But I digress. Back to Baxter’s article. ⟩

Also interesting in the piece is an exploration of the idea that “[t]he machine-learning processes used to train generative AI algorithms may be a creative process themselves.” What’s going on in those AI black boxes? Is it computation or creativity? Or both?

On a side note, I hadn’t heard of the AI v the Mind series this article is part of but it sounds interesting:

“a series that aims to explore the limits of cutting-edge AI, and learn a little about how our own brains work along the way. Each article will pit a human expert against an AI tool to probe a different aspect of cognitive ability. Can a machine write a better joke than a professional comedian, or unpick a moral conundrum more elegantly than a philosopher? We hope to find out.”

BBC

‘Blade Runner 2049’ Producers Sue Elon Musk, Tesla and Warner Bros. Discovery, Alleging Copyright Infringement

Musk wanted to tie the launch of the Robotaxi to Blade Runner 2049 and turned to AI when the production company turned him down.

Alcon Entertainment, which produced Blade Runner 2049, is seeking an injunction and damages for copyright infringement and false endorsement against Elon Musk, Tesla and Warner Bros. Discovery because they allegedly used AI-generated images that look like scenes from the film as part of the launch of Tesla’s self-driving Robotaxi. Apparently the defendants sought permission but were knocked back by Alcon, so they generated something similar using AI for their event on Thursday 10 October 2024. In its filing Alcon adamantly insisted that there be no affiliation between the film, Ryan Gosling, Harrison Ford and “Tesla, X, Musk or any Musk-owned company”. The injunction seeks to block Musk, Tesla, WBD and “anyone working in concert with them from further copying, displaying, distributing, selling or offering to sell ‘BR2049’ or protectible elements thereof in connection with Tesla or Musk, or making derivative works thereof for such purposes.”

What strikes me most about this situation is that it flies in the face of one of the fundamental principles of copyright – that a copyright owner has the right to grant or not grant permission to use their copyright protected material – and it so obviously demonstrates that AI can be used to infringe copyright. In fact, the Blade Runner 2049 makers go so far as to claim that one of the images in Musk’s presentation at the launch “… appears to have been generated by AI based on official stills from “Blade Runner 2049” or “some closely equivalent input direction””.

What is also telling is how vehemently the film production company wants to distance itself from Musk as a person. In addition to “more ordinary commercial issues, there is the problematic Musk himself. Any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account. If, as here, a company or its principals do not actually agree with Musk’s extreme political and social views, then a potential brand affiliation with Tesla is even more issue fraught.” Harsh, but true.

Variety

Cultural burning isn’t just important to Indigenous culture – it’s essential to Australia’s disaster management

Rekindling ancient Indigenous cultural burning practices is crucial to disaster management in Australia.

There are signs that Indigenous cultural burning practices are being increasingly accepted as a disaster mitigation process. Ancient techniques of gentle, regular burns were an important part of Indigenous land management; burning off fuel to reduce the chances of out-of-control bushfires and creating safe havens of burned land when needed. Colonisation and Indigenous dispossession radically altered the Australian landscape through land clearing, irrigation and farming. “Even the creation of national parks transformed landscapes, as Western practices of more passive management replaced active Indigenous management”. Colonial intervention suppressed Indigenous fire practices but, even as Indigenous groups work to diligently and strategically rekindle the practice, it is also evolving.

The risks are stacking up: La Niña rain patterns have prompted vegetation growth, increasing fuel loads; introduced and highly flammable buffel grass is spreading; and climate change is making landscapes increasingly volatile and combustible. Signs point to a dangerous fire season, with the Australasian Fire and Emergency Service Authorities Council’s seasonal bushfire outlook projecting “the risk of early fires and a higher-than-usual bushfire risk over vast areas of Australia.”

Australia needs to mainstream cultural burning through “sharing the knowledge of when and how to burn, and resourcing Indigenous groups to undertake training and burns. Doing this will not only benefit the land and Indigenous groups, but all Australians.”

The Conversation

Pressure mounts as Leo Burnett, Saatchi & Saatchi, Special Australia and more pull the plug on Campaign Brief

Ad industry trade publication blames its sector for an all-male awards spread.

Embattled ad industry trade publication Campaign Brief continues to see fallout after its annual awards spread The Work 2024 featured no women. ⟨ While we’re talking about it, it’s a pretty white male list too ⟩ I admit I only became aware of the issue this week, but in reading up on the background it’s disheartening to see Campaign Brief’s defensive and dismissive apology.

“At Campaign Brief, we take the concerns raised by the community seriously, especially regarding gender representation in the advertising industry. These LinkedIn discussions have sparked important conversations about the lack of women in senior creative leadership roles, and we agree that this highlights a broader issue within the industry,” he wrote.

“While the list reflects the current makeup of leadership in creative departments, it is not an endorsement of the imbalance. We recognise the significant contributions of women across all sectors of the industry – whether in management, media, account service, or production – and fully support efforts to increase female representation in creative leadership.”

However, in response to questions from Mumbrella, Lynch defended the list, saying it is reflective of the advertising industry as the rankings come from the credits supplied by agencies, and it is “not Campaign Brief’s fault”.

Given that, it is hardly surprising to see a bunch of ad agencies end ties with the trade publication this week. Diversity in every industry is important and it does exist.

Mumbrella


Add it to the pile

New additions to the unread pile:

Guidance on privacy and the use of commercially available AI products

OAIC also released separate guidance on the use of AI products.

Because the OAIC guidance on training and fine-tuning AI models was so comprehensive, I haven’t had a chance to read this companion guidance document yet. That said, it looks at the privacy obligations that exist when using a commercially available AI system.

OAIC


A bit on the side

Other tasty tidbits this week:

Major publisher Penguin Random House is adding words to the copyright notice in the front matter of its books declaring they cannot be reproduced in any manner for training AI. It is the first publisher ⟨ that I am aware of, anyway ⟩ to take this step, even if the enforceability of this and other AI opt-outs remains contested.

Notion is getting closer to releasing its email product. While details are pretty vague, I’m sure they are thinking about how their email tool will interact with their existing notes, calendar and AI tools. If it does what they are claiming, it could genuinely prompt a complete rethink of how your inbox interacts with the rest of your productivity workflow. There’s a waitlist for access when it opens beyond preview.

Google Calendar on the web is getting a design overhaul, introducing dark mode ⟨ finally! ⟩ and a visual refresh to bring it into line with Material Design 3.

More to read

Of course, there’s lots of other stuff I have been reading that doesn’t make it into the weekly round up. If the long list is too much, I also group links into collections:

If you have a Google Account you can even share links with me.

Was this free blog post helpful?

If so, I encourage you to please show your support through a small contribution – it all helps me keep creating free arts marketing content.

Disclosure

AI use

This blog post was drafted using Google Docs. No part of the text of this blog post was generated using AI. The original text was not modified or improved using AI. No text suggested by AI was incorporated. If spelling or grammar corrections were suggested by AI they were accepted or rejected based on my discretion (however, sometimes spelling, grammar and corrections of typos may have occurred automatically in Google Docs).

The banner image (i.e. the first image at the top of the blog post) was generated by AI using Text to Vector Graphic (Beta) in Adobe Illustrator.


Credits


Image: A pattern made up of a repeated icon of three books in a pile on top of each other. The top book has an orange cover, the middle book has a green cover and the bottom book has a yellow cover. The piles of books are on a purple background. The piles of books are on a bright orange background. The icon is an adaptation of a vector graphic generated by Elliott Bledsoe using the AI tool Text to Vector Graphic (Beta) in Adobe Illustrator. Prompt: ‘Hand drawn pile of books simple lines’.


Provenance

This blog post was produced by Elliott Bledsoe from Agentry, an arts marketing micro-consultancy. It was first published on 27 Oct 2024. It has not been updated. This is version 1.0. Questions, comments and corrections are welcome – get in touch any time.


Reuse

Good ideas shouldn’t be kept to yourself. I believe in the power of open access to information and creativity and a thriving commons of shared knowledge and culture. That’s why this blog post is licensed for reuse under a Creative Commons licence.

A bright green version of the Creative Commons brand icon. It is two lowercase letter Cs styled similar to the global symbol for copyright but with a second C. Like the C in the copyright symbol, the two Cs are enclosed in a circle.

Unless otherwise stated or indicated, this blog post – (Un)read in the ledger: Monday 21–27 October 2024 – is licensed under the terms of a Creative Commons Attribution 4.0 International licence (CC BY 4.0). Please attribute Elliott Bledsoe as the original creator. View the full copyright licensing information for clarification.

Under the licence, you are free to copy, share and adapt this blog post, or any modified version you create from it, even commercially, as long as you give credit to Elliott Bledsoe as the original creator of it. So please make use of this blog post as you see fit.

Please note: Whether AI-generated outputs are protected by copyright remains contested. To the extent that copyright exists, if at all, in the icon I generated using AI or the banner image I compiled using that icon for this blog post (i.e. the first image at the top of the blog post), I also license it for reuse under the terms of the Creative Commons licence (CC BY 4.0).


