Seed-Music
Seed-Music is a unified framework for high-quality, controllable music generation and editing. It produces vocal and instrumental works from multimodal inputs such as lyrics, style descriptions, sheet music, audio references, and voice prompts, and it supports post-production editing of existing tracks through direct modification of melodies, timbres, lyrics, and instruments. The framework combines autoregressive language modeling with diffusion approaches in a three-stage pipeline: representation learning encodes raw audio into intermediate representations (audio tokens, symbolic music tokens, and vocoder latents); generation maps the multimodal inputs into those music representations; and rendering converts the representations into high-fidelity audio. The system supports lead-sheet-to-song conversion, singing synthesis, voice conversion, audio continuation, style transfer, and fine-grained control over musical structure.
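Seed-Music's implementation is not publicly released, so the sketch below is only a schematic of the three-stage flow described above; every function and type name in it is hypothetical and illustrative, not the actual API.

```python
# Hypothetical schematic of Seed-Music's three-stage pipeline:
# representation learning -> generation -> rendering. All names are illustrative.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MusicRepresentation:
    # Intermediate form: "audio_tokens", "symbolic_tokens", or "vocoder_latents".
    kind: str
    data: list[int]


def learn_representation(raw_audio: list[float]) -> MusicRepresentation:
    """Stage 1: encode raw audio into an intermediate representation."""
    return MusicRepresentation(kind="audio_tokens",
                               data=[hash(x) % 1024 for x in raw_audio])


def generate(lyrics: Optional[str] = None,
             style: Optional[str] = None,
             reference: Optional[MusicRepresentation] = None) -> MusicRepresentation:
    """Stage 2: map multimodal inputs to a music representation
    (in the real system, an autoregressive LM or a diffusion model)."""
    seed = hash((lyrics, style)) % 1024
    return MusicRepresentation(kind="audio_tokens",
                               data=[(seed + i) % 1024 for i in range(16)])


def render(rep: MusicRepresentation) -> list[float]:
    """Stage 3: decode the representation into high-fidelity audio samples."""
    return [t / 1024.0 for t in rep.data]


audio = render(generate(lyrics="city lights at midnight", style="synthwave"))
```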
SFX Engine
Our AI sound effect generator is built for audio producers, video editors, and game developers, and lets you craft custom audio experiences for any project, whether film, gaming, or music production. Fine-tune every sound effect with a detailed text description for precise customization, and generate any sound effect with unlimited variations. Pricing is simple and transparent, with no hidden fees: purchase as many credits as you need, with no subscription required, and pay only for the sound effects you use. Commercial use is included by default; every sound effect you generate is licensed for commercial use with no additional fees or royalties, so you can use it in your projects without worry.
MusicGen
Meta's MusicGen is an open-source deep-learning language model that generates short pieces of music from text prompts. It was trained on 20,000 hours of music, including whole tracks and individual instrument samples. Given a description, the model generates 12 seconds of audio; you can optionally provide reference audio from which a broad melody is extracted, and the model then tries to follow both the description and that melody. All samples here are generated with the melody model. You can also run it on your own GPU or on Google Colab by following the instructions in our repo. MusicGen consists of a single-stage transformer language model with efficient token-interleaving patterns, which eliminates the need to cascade several models. It generates high-quality samples conditioned on a textual description or melodic features, giving better control over the output.
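MusicGen is distributed through Meta's audiocraft repository; the snippet below follows the usage documented in that repo's README at the time of writing (the checkpoint identifier and the reference-audio path are illustrative, and the API may have evolved since). Melody conditioning operates on a chromagram extracted from the reference, which is why only the broad melody, not the timbre, is followed.

```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the melody-conditioned checkpoint and request 12 seconds of audio.
model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=12)

descriptions = ['lo-fi hip hop beat with warm Rhodes chords']

# Text-only generation.
wav = model.generate(descriptions)

# Text + melody: condition on a broad melody taken from reference audio.
melody, sr = torchaudio.load('reference.wav')  # path is illustrative
wav = model.generate_with_chroma(descriptions, melody[None], sr)

for idx, one_wav in enumerate(wav):
    # Writes sample_{idx}.wav with loudness normalization.
    audio_write(f'sample_{idx}', one_wav.cpu(), model.sample_rate,
                strategy="loudness")
```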
OpenAI Jukebox
We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artistic styles. We’re releasing the model weights and code, along with a tool to explore the generated samples. Provided with genre, artist, and lyrics as input, Jukebox outputs a new music sample produced from scratch. It produces a wide range of music and singing styles and generalizes to lyrics not seen during training; all the lyrics below were co-written by a language model and OpenAI researchers. Even when conditioned on lyrics seen during training, Jukebox produces songs very different from the originals. We can also provide 12 seconds of audio to condition on, and Jukebox completes the rest in a specified style. We chose to work on music because we want to continue to push the boundaries of generative models. Jukebox’s autoencoder compresses audio to a discrete space using a quantization-based approach called VQ-VAE.
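To make the quantization step concrete: a VQ-VAE snaps each continuous latent vector to its nearest entry in a learned codebook, producing discrete tokens that a prior model can then predict. The PyTorch sketch below is a generic, single-scale illustration of that bottleneck, not Jukebox's actual code (Jukebox uses a hierarchical VQ-VAE with codebooks at three temporal resolutions).

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Generic VQ-VAE bottleneck: map each continuous latent vector to its
    nearest codebook entry, yielding a discrete code. Illustrative only."""

    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) continuous encoder output.
        # Distance from each latent to every codebook vector.
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))  # (B, T, N)
        codes = dists.argmin(dim=-1)      # discrete token ids, (B, T)
        z_q = self.codebook(codes)        # quantized latents, (B, T, dim)
        # Straight-through estimator: copy gradients from z_q back to z.
        z_q = z + (z_q - z).detach()
        return z_q, codes


vq = VectorQuantizer()
z = torch.randn(1, 100, 64)   # e.g. 100 latent frames of compressed audio
z_q, codes = vq(z)            # codes are the discrete tokens the prior models
```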