![There are fears generative AI, such as that created by OpenAI, could kill certain industries. (AP PHOTO)](/images/transform/v1/crop/frm/silverstone-feed-data/931eb0f3-f934-4a49-a549-fc9362a2d55b.jpg/r0_0_800_600_w1200_h678_fmax.jpg)
Australia urgently needs to restrict the use of AI technology to protect workers, an inquiry has heard, amid warnings it's already being used to cut creative jobs.
The call came after an actor told an inquiry on Tuesday his employment contract was cancelled and his voice cloned to complete a video production.
But the Senate inquiry into Adopting Artificial Intelligence also heard Australian businesses should create their own AI models to address problems with the technology and ensure national guidelines were met.
The inquiry's third public hearing saw representatives from the Media Entertainment and Arts Alliance call for legal restrictions on AI technology to ensure people employed in creative roles were compensated for their work.
The union's campaigns director Paul Davies said the union was not opposed to the use or development of AI tools, but wanted laws ensuring greater transparency about the data those tools are trained on and when AI is used, and guaranteeing that creators are paid for their work.
"Our position is that no human product ... should be used without recognition (or) should be used without compensation," he said.
Voice actor Cooper Mortlock told the inquiry his work on an animated series was cut short in 2022 when producers used an AI tool to clone his voice without his knowledge or compensation.
"When we reached about episode 30 of the promised 52 episodes, our producer cancelled the contract, saying 'we've decided to discontinue making the series'," he said.
"A year later, after the contract had finished, they released another episode of this series using what was obviously an AI copy of my voice and the other actors' voices."
Mr Mortlock said the company initially denied using AI technology but later clarified that his employment contract allowed them to do so.
"This contract was written and agreed on before AI voices and AI voice replication was as sophisticated and as prevalent as it is so we had no way of knowing that this could have happened," he said.
Digital Rights Watch founder Lizzie O'Shea said the example showed Australians needed greater protection for their personal information, including voice, likeness and biometric data, and also needed more restrictions around individual consent.
"It is clear that our laws are decades out of date," she said.
"There has to be structural interventions that limit the use of personal information and seek to put limits on data-extracted business models."
But Anton van den Hengel from the University of Adelaide said local restrictions on the use of AI might have limited effect because the technology was created and governed by overseas firms, most of them American.
"Us making regulations and so on is of really very little impact because we are not an active participant in this space," he said.
"The only way to have a say in what happens globally in this critical space is to be an active participant."
Professor van den Hengel said the nation had been slow to embrace AI but local researchers and businesses were "very well placed" to create a sovereign AI model.
"The first step is that we should build our own language model," he said.
"This is critical in order that we might have a language model that captures Australian values and that leverages that trust."
The Senate inquiry is expected to issue findings on the opportunities and impacts of AI in September.
Australian Associated Press