How Is AI Overturning the Traditional Entertainment Industry? From Hollywood to Universal Music
Director Christopher Nolan has said that AI researchers are facing their "Oppenheimer moment." Oppenheimer, Nolan's latest film, opens this week; its protagonist, J. Robert Oppenheimer, called for international control of nuclear weapons. In the eyes of some industry experts, the potential threat of artificial intelligence is comparable to that of nuclear weapons.
However, Nolan believes that for the film industry, generative artificial intelligence will create "huge opportunities" in fields such as visual effects, and AI will become a powerful tool in the entertainment industry.
Can AI replace actors and singers? The technology already exists
As Oppenheimer premiered, Hollywood was on strike over the threat of "AI synthetic actors." Although that fear has not yet materialized, actors sense a real risk that their roles may one day be taken over by AI digital humans.
Film studios can already capture actors' likenesses through techniques such as 3D body scanning and use them to build generative AI digital humans. Such imagery can be used in post-production to precisely replace an actor's face or to create a digital stand-in.
Post-production companies are among the most eager adopters of AI. One of the technology's biggest benefits is that it makes digital modifications easy in post-production, such as replacing lines of dialogue or quickly changing digital costumes, which can save hundreds of thousands of dollars in reshooting costs.
But the Screen Actors Guild warns of potential "overreach" by artificial intelligence and urges production companies to obtain permission before making any changes to an actor's image, likeness, or voice.
AI's influence on the entertainment industry extends beyond Hollywood into music. The recently popular "AI Stefanie Sun" in China is a case in point: with model training and post-processing, an AI synthesis system can mimic singer Stefanie Sun's voice to perform cover songs.
The biggest impact of artificial intelligence on music is that it makes creation easier, which will also bring profound changes to the music industry.
Recently, at a forum on "Exploring the Future of Music Creation" jointly organized by NetEase Cloud Music and the Tianqiao Brain Science Research Institute, Dai Shuqi, a doctoral student in computer science at Carnegie Mellon University, explained that automatically generating a piece of music is not easy. Beyond melody generation, a system must also handle elements such as multi-track accompaniment generation, lyric generation, orchestration, and arrangement. In the past, these steps depended on a composer's inspiration, careful planning, and personal experience; today, AI can handle these complex processes and automatically generate a complete piece of music.
Dai Shuqi divides the applications of generative artificial intelligence in music into three types: first, symbolic-level score generation, such as melody generation, multi-track accompaniment generation, lyric generation, orchestration, and arrangement; second, audio-level generation, such as vocal synthesis, instrument synthesis, performance-control synthesis, and end-to-end audio composition; and third, multimodal generation. The underlying techniques range from rule-based templates and traditional machine-learning models to large deep-learning models.
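To make the first category concrete, here is a purely illustrative toy, not any system Dai describes: a first-order Markov chain that learns note-to-note transitions from a few example melodies (as MIDI pitch numbers) and walks those transitions to emit a new symbolic melody. Real symbolic-generation models are far more sophisticated.

```python
import random

def train_markov(melodies):
    """Count note-to-note transitions across example melodies (MIDI pitches)."""
    table = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the transition table to emit a new melody of the given length."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:  # dead end: no observed successor for this note
            break
        melody.append(rng.choice(choices))
    return melody

# Two short training melodies in C major (MIDI note numbers, middle C = 60).
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 64, 60, 62, 64, 62, 60]]
table = train_markov(corpus)
print(generate(table, start=60, length=8, seed=1))
```

The output stays within the notes of the training melodies but recombines them in a new order, which is the basic intuition behind statistical melody generation.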
In May this year, Google released MusicLM, a music-creation tool that generates music from written prompts. Earlier, the AI music generator Boomy had reportedly created more than 16 million original songs.
But the traditional music industry is cautious about the expansion of artificial intelligence. Universal, the world's largest record company, advocates extending copyright to data used in machine learning, such as the original recordings used to train computers to imitate voices.
Music recommendation relies heavily on AI algorithms
Jonathan Taplin, director emeritus of the Annenberg Innovation Lab at the University of Southern California and a film producer and writer, opposes the use of generative artificial intelligence in the entertainment industry. Taplin argues that generative AI can, in theory, reuse all existing video content.
"The way large technology companies train their models is to ingest all the content on the internet, regardless of copyright," Taplin said in a recent interview with the MIT Sloan School of Management. "Google has a music-generating AI trained on every audio file on YouTube. You can send it a prompt like 'write me a song that sounds like Taylor Swift, with a sad ending' or 'a fast-paced song,' and the result sounds a bit like her. That way, someone can drop it into a video game or movie scene for free."
Streaming recommendation is another application of AI in music. Because the arrangement of notes in music follows mathematical relationships, music is particularly well suited to generative AI tools: by training models, these tools can be taught to recognize and create melodies and rhythms.
Spotify and other streaming platforms already use artificial intelligence to profile users' listening habits, analyzing a person's taste from the rhythm or mood of their favorite tracks. Deep-learning algorithms can then generate personalized song recommendations and build customized playlists.
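The profile-then-recommend loop described above can be sketched in miniature. Everything below is invented for illustration: the song names and the three-number feature vectors (tempo, energy, valence, each scaled to 0–1) are hypothetical stand-ins for the far richer signals real platforms use. The sketch averages a user's liked songs into a taste profile and ranks unheard songs by cosine similarity to it.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical catalog: song -> (tempo, energy, valence), each scaled to 0..1.
catalog = {
    "ballad_a": (0.30, 0.20, 0.40),
    "dance_b":  (0.90, 0.80, 0.70),
    "ballad_c": (0.35, 0.25, 0.50),
    "rock_d":   (0.70, 0.90, 0.60),
}

def recommend(liked, catalog, k=2):
    """Average the liked songs into a taste profile, then rank the
    remaining songs by similarity to that profile."""
    profile = [sum(v) / len(liked) for v in zip(*(catalog[s] for s in liked))]
    candidates = [s for s in catalog if s not in liked]
    ranked = sorted(candidates, key=lambda s: cosine(profile, catalog[s]),
                    reverse=True)
    return ranked[:k]

print(recommend(["ballad_a"], catalog, k=1))  # → ['ballad_c']
```

A listener who likes the slow, low-energy "ballad_a" gets the similar "ballad_c" ahead of the dance and rock tracks, which is the essence of content-based filtering.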
"Repetition-based structure is an important feature of music, and music is also multi-layered, technically diverse, and highly logical," Dai Shuqi pointed out. "The repetition and structure of music shape a listener's expectations, which in turn shape the listener's emotional reactions. Based on this, algorithms can analyze a piece's repetition structure and its curve of expected perception, and personalized music can be generated by imitating the styles a user already knows and likes."
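The repetition analysis Dai refers to can be illustrated with a deliberately minimal sketch (not his actual method): scan a sequence of note names for non-overlapping phrases that occur more than once, the kind of recurring motif that sets up a listener's expectations.

```python
def repeated_phrases(seq, min_len=3):
    """Return phrases of length >= min_len that occur at least twice,
    non-overlapping — a crude stand-in for repetition-structure analysis."""
    found = set()
    n = len(seq)
    for length in range(min_len, n):
        seen = {}  # phrase -> index of its first occurrence
        for i in range(n - length + 1):
            phrase = tuple(seq[i:i + length])
            if phrase in seen and i >= seen[phrase] + length:
                found.add(phrase)  # repeats without overlapping the first hit
            else:
                seen.setdefault(phrase, i)
    return found

# An "A B A B"-like motif: the opening phrase C-D-E returns later.
notes = ["C", "D", "E", "G", "C", "D", "E", "A"]
print(repeated_phrases(notes))  # → {('C', 'D', 'E')}
```

Production systems work on audio self-similarity rather than symbolic note names, but the goal is the same: locate the repeats that give a piece its structure.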
Last summer, Spotify spent nearly $100 million to acquire the generative AI company Sonantic. Building on Sonantic's technology, Spotify launched a personal "AI DJ" in February this year to play songs for users.
Recently, NetEase Cloud Music also launched a beta version of a personal AI DJ, built on an upgraded song-recommendation algorithm that delivers personalized recommendations.
Some worry that AI-created and AI-recommended music could make future works more "mechanized," betraying the pursuit of art's essence. Taplin believes the biggest challenges facing the entertainment industry are "too many formulas" and a "lack of originality." "Generative artificial intelligence can only generate more formulaic content. Entertainment relies on new ideas, and AI technology cannot generate new ideas," he said.
But some industry insiders believe people's concerns about AI may be exaggerated: AI has not yet developed independent, autonomous consciousness, let alone the ability to replace musicians or other practitioners.
"Currently, AI is still focused on learning from existing materials and experience to help practitioners improve efficiency and cut costs," said Jiang Han, a senior researcher at Pangu Think Tank. Judging from current industry practice, he believes, AI can be used not only for music creation and production but also for automation and effects optimization in live performances.
Some industry insiders believe AI music creation may also play a greater role in fields such as medicine, for example by improving people's lifestyles and even enhancing the treatment of brain diseases.
Lu Jing, an associate professor at the School of Life Science and Technology of the University of Electronic Science and Technology of China, told reporters that AI can be used to create an art form called "brainwave music." When we hear music, the brain's reward system is activated and dopamine is released, producing a feeling of pleasure. He gave an example: patients who listen to brainwave music during tooth extraction experience better pain relief than with cognitive-behavioral therapy. Brainwave music can also regulate sleep, with slow-wave sleep music showing a stronger sleep-promoting effect.