Artificial intelligence has become a hot topic with the rise of new generative AI programs like Midjourney and ChatGPT, which enable users to generate creative work at the click of a button. Is this a fun tool? A threat to creators’ livelihoods? A breakthrough in synthesising data? What do you need to know? While there is much yet to be understood about this evolving technology, what’s clear is that AI presents complications on a range of fronts, from copyright ownership to the very idea of originality and creativity.
To get you up to speed with the current conversations about artificial intelligence, we’ve canvassed the concerns of authors’ groups, publishers and artists around the world about the impact this technology may have on the creative industries.
Artificial intelligence art-generation tools use machine learning to create images or written work based on a user’s inputs. For example, using DALL·E you could type in a detailed description of the image you want – say, a cat watching a sunset in the style of a Picasso painting – and the program will generate that image for you in moments. Or you could ask ChatGPT to write you a ‘get well soon’ message in the style of Nick Cave.
These programs generate images or text using models trained on vast databanks of material, typically assembled by scraping whole libraries of artwork, music and writing from across the web. Not all designers of AI applications are open about the content used to train these tools, and it appears they have not sought consent from the artists or copyright holders to use their IP for this purpose.
While some are extolling the time-saving potential of AI applications – describing them as tools to help visualise scenes, to brainstorm new ideas, or create marketing copy, as outlined in Jason Sanford’s comprehensive summary of what AI means for authors – there are many areas of concern raised by this technology.
Most importantly, if machine learning operates through the absorption and reproduction of many, many original creative works, it is exploiting the work of artists and writers. Our view is that it’s essential that the copyright owners of these works have a say in – and potentially receive a fee for – the use of their art in this way.
A key question is whether AI could threaten writing and illustrating jobs. Are cost-saving corporations likely to replace copywriters, journalists or graphic designers with machine learning programs, particularly as the tools become more advanced?
There’s been buzz among indie authors about using AI tools to generate cover imagery – will some publishers begin to outsource this work as well? Science fiction and fantasy publisher Tor Books is already due to release a book with a jacket image generated by Midjourney.
With tools like ChatGPT freely available and capable of generating cogent text at the click of a button, it is now possible to create books in mere moments. We’ve already seen an uptick in emails promising to help aspiring authors write a book outline in a day or a novel in a week, using AI tools to deliver a quick plot, starter text and more. Several authors have created a book in a day with the help of AI tools and managed to upload and sell these books on Amazon.
While low-quality books won’t sell well, they may contribute to a flood of AI-written texts on the market. Although AI-generated writing is currently clumsy and fairly easy to spot, an avalanche of these books may make the challenges of discoverability and dilution of audiences even tougher for professional writers.
That’s not to mention that ChatGPT has no filter for accuracy, and does not distinguish between reliable and unreliable sources of information. Are we likely to see a significant rise in misinformation or an amplification of discriminatory content?
And how might literary journals be affected by submissions written by machines rather than humans? An article penned by Neil Clarke, editor of the literary magazine Clarkesworld, went viral after an overwhelming increase in spam AI-generated submissions forced the journal to temporarily suspend submissions. While Clarke noted that AI-generated stories are currently easy to detect, and rejecting and banning submissions is simple enough, “it’s growing at a rate that will necessitate changes.” Third-party detection tools are costly, particularly for small, underfunded literary organisations, and limiting submission windows or seeking solicited submissions will increase barriers for new authors.
It appears this problem has not reached Australian shores just yet. Fiction Editor of Overland and UTS Creative Writing Lecturer Claire Corbett says, “No AI short stories have been submitted to Overland yet but of course that could and likely will change. We’ve not even had a chance to discuss what we think about this yet.
“I think, despite Barthes’ quite reasonable contentions about the death of the author, that with fiction especially, the author’s intention really does matter to most readers and just knowing that the production of a work of fiction was made with intent by another human is of critical importance to readers, even or especially if they’re not 100% sure what those intentions are. There is an excellent recent essay by Rob Horning on some of these issues in Overland.”
Editor of Meanjin Esther Anatolitis confirmed the journal has not seen an influx of AI submissions either, saying, “Meanjin didn’t have this problem, thankfully! And that might be because we’re very interested in the writers we publish: their cultural context, their professional practice, where they are in their careers and so on. Yes, there’s a great Australian tradition of literary hoaxes, but we’re not too concerned about being fooled by AI; the risk here is more about a writer undermining their own career trajectory.
“I’d love to see writers play with AI formats – I’d welcome experimental pieces that highlight the particular characteristics and tropes a writer observes in playing with ChatGPT, for example. We need playful ways of exploring what AI is going to come to mean for journalism, for professional writing, for the “content” we see on websites – you know how I hate the word “content”, but maybe AI is going to be the creator of the content I’ve long abhorred! So we need to understand it better, right now.”
Are students likely to take up AI tools to write essays and assignments? Many Australian state education departments have banned ChatGPT in schools. In a New Yorker essay Ted Chiang writes, “Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something we’ve never read.”
Claire Corbett says, “I’m less concerned about my students using AI to write their assignments than perhaps most other sections of the academy because we use developmental processes and workshopping over time and more importantly most of our students really want to express themselves and develop their craft and their potential. Will this be an issue in future? Certainly, especially in undergraduate programs.”
Then there’s the question of who owns the copyright in works generated by AI. And should the role of AI tools be transparently stated and acknowledged? The US Copyright Office has already had to grapple with this issue when it reconsidered the copyright registration of Kristina Kashtanova’s comic book, Zarya of the Dawn, which featured images created with Midjourney. Kashtanova’s copyright registration for those images was cancelled as they are “not the product of human authorship”.
While questions about technology, originality and the value of art have long been debated, the advancement and accessibility of these tools have brought these considerations to the forefront once again.
Author Alan Baxter says, “Being good at any kind of art takes a lot of dedication and practice, a lot of try and fail, a lot of soul searching and perseverance. AI removes all of that. It’s “creativity” without the work for people too entitled to make the effort. It’s art without craft. These are people who want to call themselves writers or artists without putting in the time to learn and develop a voice or style of their own. They would rather let computers trawl all the art made by other people and homogenise it into something bland and soulless and then call that an achievement.”
Meanjin Deputy Editor and Fiction Editor Tess Smurthwaite echoes the focus on voice: “AI approaches a writing task with a specific purpose, while in the writing that interests us, the purpose comes through in the reading of the piece. It’s the voice of the writer that’s most interesting.”
Another key question should also be asked: why create tools to replace people in tasks they enjoy doing anyway?
“In a world where people are still cleaning toilets and working in mines, I can’t believe we’ve got the robots making our art and stories. I thought robots were supposed to do the shitty jobs to allow more people to pursue their passions. AI is simply a hideous and well-focussed encapsulation of capitalism at the expense of humanity, as usual,” says Alan Baxter.
Artists, authors, narrators, arts organisations and companies have started fighting back through a variety of means.
Lawsuits have been filed: Getty Images has commenced legal proceedings against Stability AI, developer of Stable Diffusion, for using its images without payment; artists have launched class-action suits of their own; and computer programmers have filed a claim against Microsoft, GitHub and OpenAI, alleging the AI coding tool Copilot relies on “software piracy on an unprecedented scale.”
A group of artists has launched Have I Been Trained? – a website that allows anyone to check whether their artwork has been used to train AI. A growing number of creators can now discover, and choose to opt out of, having their work included in the training datasets used for AI applications – although the process for doing so is not always simple or clear. Some creators have argued that an opt-out process contravenes the European General Data Protection Regulation, under which consent cannot be assumed by default.
In the audio space, authors and narrators, supported by SAG-AFTRA, were up in arms about Findaway Voices – an audiobook distributor owned by Spotify – providing Apple with access to some of its audiobook files to train Apple’s AI-narration tools, tools which threaten narrators’ livelihoods. Findaway Voices and Apple have since agreed to halt all use of these files for machine-learning purposes; however, Writer Beware has reported that authors are still finding machine-learning clauses in Findaway distribution agreements.
In the writing world, the US Authors Guild has drafted a new model clause for publishing and distribution agreements which prohibits the author’s copyright works from being used for the purposes of machine learning without permission – a clause which authors and agents can request to have added to their contracts.
While clauses such as these have not yet been adopted in Australia, the complications that AI presents are on the radar of Australian publishers.
Dr Stuart Glover, Policy and Government Relations Manager at the Australian Publishers Association, says, “We’re looking closely at AI – it’s a complex issue for both trade and educational publishers, that could deeply affect the future of learning as well as creative and artistic work. Publishers are working through what it means for their authors, illustrators and titles, and we’re not aware of any companies changing their contracts just yet. As an industry, we’re looking at how we adapt – and it’s heartening to see the Attorney-General prioritise AI in the upcoming copyright consultation, encouraging serious conversation about the implications of this technology.”
Alongside attention from the Attorney-General’s Department on this issue, Australian Federal MP Julian Hill has called for a white paper or inquiry into the potential impacts of artificial intelligence applications.
The ASA is monitoring local and international discussions about AI tools, and will keep our members updated as the situation develops.
ASA CEO, Olivia Lanchester, says, “At present, we are deeply concerned about the large-scale scraping and exploitation of digital works without regard to creators’ rights. AI stands firmly on the shoulders of hundreds of thousands of preceding creators who poured hours of labour into their work, many of whom still enjoy a monopoly over their works afforded by copyright law. That monopoly ought to be respected. In our view, new technologies ought to be enabled by licensing rather than appropriation.”
We look forward to meaningful conversations with the Attorney-General’s Department about artificial intelligence in upcoming roundtable meetings.
If you’d like more information about artificial intelligence and copyright, the Australian Copyright Council will be hosting an online information session on Tuesday 28 March.
“It’s possible that, in the future, we will build an A.I. that is capable of writing good prose based on nothing but its own experience of the world,” Ted Chiang writes. “The day we achieve that will be momentous indeed – but that day lies far beyond our prediction horizon.”
If you are seeking advice, or would like to share any feedback or concerns about this issue, please use the ASA’s free member Advice service or contact Lucy Hayward, Marketing & Communications Manager, at [email protected].