The impact of AI on in-house agencies

While in-house agencies need to put processes in place to guard against the risks of AI, its use could unlock their unique creative potential


As with all new technologies, when it comes to AI, regulations and best practice are lagging behind what is happening at the coalface. The majority of in-house agencies (IHAs) are engaging with AI in some way – whether it’s for briefs, concept visuals, or transcreation and versioning. But they are doing so without universal agreement on the rules of the game.

At a recent In-House Agency Leaders Club event, lawyers Amelia Maher and Jamie Smith from the media law firm Sheridans provided some expert analysis of the risks and opportunities around the use of AI. I’ve pulled together some key insights and advice from the session.

Firstly, there is little current consensus on best practice or regulation when it comes to AI. The UK Government has undertaken a consultation process, but that will take time, and its outcome may no longer be relevant to the creative industries by the time it concludes. Similarly, although the likes of the Centre of the Picture Industry have published some guidelines, there is, as yet, no formal industry-wide guidance from major organisations that has been widely adopted.



In the light of this, every brand and, by extension, IHA needs to establish its own ethical, legal and practical framework for the use of AI. It is almost certain that any commercially available AI tool will have been trained using copyrighted images or other data. Getty Images, for example, is suing Stability AI, the maker of the Stable Diffusion tool, for allegedly using millions of its images without permission to train its system.

Therefore, using any of these tools opens a brand up to potential risk. That risk can be mitigated in a number of ways. If you are using AI to make a fairly generic image of, say, a mountainous landscape, the risk of a copyright-holder coming after you for infringement is low. However, if any images generated by the tool appear similar in style to those of a particular artist or photographer, the risk is much higher.

Brands can lower their risk by training their own systems on assets that they own, or at least are sure they have the appropriate rights to. But this can be complex: commercials, for example, might feature talent that a brand only has the rights to use for a limited period, while most images are licensed rather than owned outright. Users also need to be aware of the terms and conditions of the platforms they are using, and of who owns the inputs and outputs.

IHAs also need to keep detailed records of the prompts used for AI tools, whether by them or by external partners such as agencies, production companies and post-houses. Evidence that there were no prompts used in the process that may have introduced ‘bias’ – such as asking for images which look like the work of an artist, or a car which looks like those of a rival manufacturer – will help guard against any future complaints. It’s also highly likely that insurers will ask for such documentation, and evidence that brands have credible policies in place regarding the use of AI, as a requirement for production cover in future.



In terms of who should bear liability for any fall-out from the use of AI, there are worrying stories that this is being passed down the production food chain to those with the least ability to take it on. There have been reports that agencies and brands are expecting production companies and post-production facilities to solve all the legal and regulatory challenges in briefs that involve AI, and take on any liability arising from them into the bargain. This is clearly not sustainable.

Brands and IHAs also need to be aware of the confidentiality risks around the use of AI tools. Uploading images of yet-to-be-launched products – a new model of car or mobile phone, for example – runs the risk of those images being leaked to rivals, or of images derived from them being reverse-engineered to reveal the original product image. And that doesn’t just apply to images: in April this year, Mashable reported that Samsung employees had accidentally shared confidential information while using ChatGPT to check source code.

In addition, IHAs are having to manage both the expectations of the business regarding AI's potential, and the concerns of their team members. Some managers see AI as a gateway to cost savings and efficiencies. No doubt it will be, but given the complications around its use detailed above, its impact may not be quite as dramatic as hoped. Employees, on the other hand, may be justifiably concerned about what AI will mean for their jobs, and we shouldn't downplay the very real impact AI is likely to have here.



But let's not be too gloomy. There is considerable upside for IHAs when it comes to AI, which has undoubted value at every stage of the creative process. IHAs report using it to generate and refine briefs, and to create multiple concepts for visuals, storyboarding and experimentation, which can then be refined by humans. But for IHAs, it may be further along the process where AI has its most beneficial impact.

IHAs are typically responsible for a great deal of lower-tier, so-called 'churn and burn' work. There can be a huge amount of versioning, adapting and reformatting, which often leaves staff bored, unmotivated and longing to do more creative work. Unlike external agencies, however, IHAs have a business model that does not rely on maximising hours worked. If AI systems such as Medialake can automate the mundane, cut wastage, and free up time, people and budget to concentrate on bigger business challenges, their impact could be transformationally positive.

As for the legal threat to AI platforms, it appears most likely that a spirit of ‘if you can’t beat them, join them’ will prevail, so long as adequate recompense for copyright holders and creators can be built in. Last month, Getty Images announced that it is partnering with Nvidia to launch Generative AI by Getty Images, a new tool trained only on Getty Images’ own library, giving users full copyright indemnification. It seems likely that we will see other holders of large collections of copyrighted images and text following suit, making an incredible array of resources available for creatives to experiment with safely.

These tools are not going away: with the right safeguards in place, IHAs can use them to their advantage. We are seeing an evolution in IHA models as cost reduction and speed become a given and the focus switches to creativity. As former Verizon CCO Andrew McKechnie told the ANA’s ‘Continued Rise of the In-House Agency’ report this year, “the area for greatest potential for a successful in-house agency is in its ability to build talent and creative capabilities that have a positive impact on brand growth and differentiation”. AI can help in-house agencies to do just that.
