Local Suno just dropped
Posted by Different_Fix_2217@reddit | LocalLLaMA | View on Reddit | 100 comments
https://huggingface.co/fredconex/SongBloom-Safetensors
https://github.com/fredconex/ComfyUI-SongBloom
Examples:
https://files.catbox.moe/i0iple.flac
https://files.catbox.moe/96i90x.flac
https://files.catbox.moe/zot9nu.flac
There is a DPO trained one that just came out, I saw no examples for that: https://huggingface.co/fredconex/SongBloom-Safetensors/blob/main/songbloom_full_150s_dpo.safetensors
-Ellary-@reddit
Here is short Info from my personal tests:
-It is a 2B model (ACE-Step is 3.5B).
-You can't control the style of the music with text, only with a short ~10 sec MP3 example.
-It doesn't follow instructions and notes inside the prompt (unlike ACE-Step or Suno).
-Runs on a 12 GB 3060.
-I'd say only 1 out of 100 tracks is fine; ACE-Step is around 1 out of 30, and Suno is more like 1 out of 2-3.
For me it is a fun tech demo, but not a real competitor even to ACE-Step.
PM_ME_BOOB_PICTURES_@reddit
Mono is a good thing, considering there is currently no audio diffusion model in existence that can generate a stereo signal that isn't absolute garbage. Even Suno is incredibly trash at that.
Then again, I'm a producer, and most listeners experience music as "singing with exciting stuff in the background", so I guess the bar is extremely low for most people.
IrisColt@reddit
Thanks for the info, waiting then.
Numerous-Aerie-5265@reddit
How does it compare to YuE? That’s the best local music model out there now imo
EuphoricPenguin22@reddit
YuE < ACE-Step <= SongBloom, based on my experience. YuE has the nifty feature of closely following an input track with prompted vocals in its song input mode, which ACE and SongBloom seem to lack. ACE is generally more competent and higher quality than YuE, but it was released a few months after YuE came out. SongBloom, which I'm trying now, seems to have much higher-quality output than both YuE and ACE, but it's frustratingly committed to turning everything into a pop song. It sounds almost like a real vocalist on top of a subpar AI backing track, which I mark as a halfway improvement over ACE, but its total lack of controllability makes me feel ACE definitely has not been fully replaced.
-Ellary-@reddit
Sadly I didn't use YuE. Does it have ComfyUI support?
Numerous-Aerie-5265@reddit
It’s been out for a while, so I’m sure someone has made some Comfy nodes for it. If you try it, make sure to use the exllamav2 versions on GitHub; the original takes like 15 mins for 30 sec of audio, whereas the exllamav2 version is around a 1-minute wait for 30 sec of audio.
-Ellary-@reddit
Got it ty!
Demicoctrin@reddit
Personally seems pretty slow on my 4070ti Super, but I haven't done any tinkering with ComfyUI settings
-Ellary-@reddit
Agreed, ACE-Step does 2-minute-long tracks in about 30 seconds on a 3060.
Demicoctrin@reddit
Exactly. Just wish Ace-Step had better vocal quality. I'm excited for the 1.5 model
Different_Fix_2217@reddit (OP)
They say the description guided one is supposed to come out soon. This is just lyrics / sample guided.
-Ellary-@reddit
Waiting, then.
I've described my current experience.
Lemgon-Ultimate@reddit
I'm a bit sceptical about it. I trusted ACE-Step: the samples sounded good, but as I generated a lot of music with it, none of the songs were "good enough" to be enjoyable. Some had good parts, but the instruments and vocals had no impact upon listening. I'd love to generate some cool Cyberpunk songs locally and still have hope, but for now I remain cautious.
Curious_Soil9823@reddit
u/My_Unbiased_Opinion Generating Cyberpunk music with ACE-Step is possible. I've done it multiple times
Here's a GDrive folder with some stuff I generated. Drag it into ComfyUI to see the workflow:
https://drive.google.com/drive/folders/1p48E4k-MheTULCIAR0eQkUzw1EnBZupl?usp=sharing
If you need more, I can upload some more generated songs on Saturday, I'm just not at my PC right now
My_Unbiased_Opinion@reddit
Cyberpunk music would be dope. That's my dream too.
Mongoose-Turbulent@reddit
Quick question, are you able to prompt the voice and style at all? For example, male voice, rap style.
intermundia@reddit
Tried the workflow and it doesn't seem to generate lyrics; the instrumental is good, but there are no lyrics.
Tricky_Definition_87@reddit
Is it possible to finetune it?
Qual_@reddit
Hey fellow smart people out there, since we're talking about local Suno: do you know if there is something that can transform an audio track into another style? I have a medieval-themed birthday soon and I want to organize a blind test, but medieval style. Well-known music -> medieval version.
Nulpart@reddit
You can do it with Suno (cover mode), but I don't think you can upload copyrighted songs.
FriendlyUser_@reddit
I think that is a bit tricky, to be honest. Let's say you have the regular "Happy Birthday" and want it in the style of Mozart. You would need to keep the basic song dynamic but also add in quite a few notes that would fit Mozart's style and adapt them into the overall song. There are some musicians who do that, like Lucas Brar (I think he did "Happy Birthday" in 7 styles), but they use their ear to get the perfect combination and write down the arrangement. If any LLM were capable of that, I'd pay for pro. 🤣
Different_Fix_2217@reddit (OP)
This model takes audio as an input to base its song on, along with text.
_DarKorn_@reddit
Can I use it without audio input?
opi098514@reddit
Not as good as suno obviously but my god it’s getting there. Amazing for local. Stoked to see this go further.
PwanaZana@reddit
Even if local is always a year or two behind closed, local will eventually reach a good enough for most uses
spiky_sugar@reddit
The most interesting part is how small these models are considering their quality. SUNO is very likely also in this range, 7B max, which explains why they have such generous paid and free tiers...
opi098514@reddit
Yah, I was thinking these models can't be that large. TTS models are fairly small. Obviously adding music and pitch and everything adds tons of complexity, but it's nowhere near the complexity of thinking models. So in theory these things should be usable on most local systems. It's awesome. With Suno, I already enjoy listening to my own music that I wrote but never had the ability to sing or produce. Now it's getting even easier and cheaper.
-dysangel-@reddit
Yeah, wow! The music itself sounds great to me - I could see using this to generate passable generic background music for a game no problem. Lyrics style/sound seem exactly the same as Suno so I think I'd just give that a miss for now unless it's for joke songs
madaradess007@reddit
Games are like 50% music and sounds; this game you would add generated passable music to will suck donkey ass and won't be addictive.
Ylsid@reddit
You're right, I'm not interested in playing something that hasn't been well crafted and few people are
-dysangel-@reddit
I said generic background music, not all the music. I'm very interested in good sound design, but this level of quality seems fine for generating generic village/shop ambience type of stuff
Paradigmind@reddit
Tell me that you have no clue about Suno without telling me.
WyattTheSkid@reddit
I wish these ai music companies would do something with MIDI. I feel like that would be a lot more useful
Tiny_Arugula_5648@reddit
Well it's been 9 years now.. https://magenta.withgoogle.com
NoLeading4922@reddit
check out https://huggingface.co/loubb/aria-medium-base
Sea_Revolution_5907@reddit
Yeah it'd be great to have it as a plugin in a DAW.
kaleosaurusrex@reddit
That’s just text and you can do it right now
NoLeading4922@reddit
How does this compare to ace-step?
Flaky_Comedian2012@reddit
Much better audio quality, but cannot prompt it using text. All you can do is give it some reference audio and lyrics and instrumental tags and hope for the best.
NoLeading4922@reddit
In terms of musicality do you think it performs better than Ace-step?
ArchdukeofHyperbole@reddit
Can the model be ggufed?
nntb@reddit
How does it compare to ace?
pumukidelfuturo@reddit
It's where Suno was one year ago. Probably next year we'll have something we can actually use with "good sound quality". Good starting point; needs lots of refinement.
Muted-Celebration-47@reddit
It's not close to the latest version of SUNO, but I think it compares to the first version of SUNO.
fish312@reddit
The common thing between YuE and AceStep and the other dozens of forgotten text to music models is that they don't care about llama.cpp.
Hopefully this time will be different, but I wouldn't hold my breath.
EuphoricPenguin22@reddit
Maybe I'm missing something, but why would you want that? For image, video, and audio generation, support with ComfyUI is generally considered the gold standard. I could understand if it was a robust language-first model with multi-modal capabilities, but this is only a music generation model with multi-modal inputs.
fish312@reddit
Comfyui is massive, complex and full of dependencies. I want something lightweight
sleepy_roger@reddit
They work in Comfy generally though which is nice.
_raydeStar@reddit
They provided ComfyUI support and that's huge, honestly. Now I can just pop it in instead of running some Gradio thing they set up last minute.
Danny_Davitoe@reddit
Not including a Readme.md with a description of your model should be a criminal offense.
TheRealMasonMac@reddit
https://github.com/Cypress-Yang/SongBloom
90hex@reddit
OMG this is sick. Thanks for posting bro. How do you think it compares to Suno 4.5+, especially for vocals?
Different_Fix_2217@reddit (OP)
Obviously not quite there but it is catching up extremely quickly. This is crazy for something running on my computer and blows away everything before it.
spawncampinitiated@reddit
How does it go about generating short samples for further manipulation in DAWs?
90hex@reddit
It will only get better. Can’t wait to see what comes after. In the mean time let’s enjoy our unlimited, free and local models.
Ok_Appearance3584@reddit
Sounds mono to me. Useless.
rkfg_me@reddit
It's stereo but it begins with the fragment you upload, and that one is definitely mono.
Flaky_Comedian2012@reddit
It is not mono. It just has bad stereo separation on instruments in general, like early Suno models. Some generations have more separation than others. With headphones you can hear it more easily, and when looking at the waveform at those spots you will see there are some differences between the right and left channels.
mycall@reddit
Just use the loudest speakers you can get.
drifter_VR@reddit
Opened one of the .flac files in Audacity to confirm. Yep it's mono.
Smile_Clown@reddit
Ok, weird stuff. The reference audio sometimes gets integrated.
I tried an artist's song: it stuck the intro in completely, then did a pretty good job. It cloned his voice pretty well too, which might actually be a problem if you think about it, even aside from copyright issues.
Overall it needs work. When I added an instrumental of the same song, the lyrics I created went all wonky and bounced between what they should be and lyrics that were not there.
Needs a bake, or at least the text-to-music model.
Cool though!
Flaky_Comedian2012@reddit
You might get better results if you change the generation length as well as the area within the reference song you are sampling. I don't know if it is just a coincidence, but if I am not writing [verse], [chorus], and other instructions in lowercase, I get much worse results. According to the documentation, only [intro], [outro], [inst], [verse], and [chorus] are accepted as tags for lyrics.
seoulsrvr@reddit
Is it possible to restrict the model to straight instrumental or even percussion generation?
Flaky_Comedian2012@reddit
I have not tried it myself, but according to their GitHub you can do that by giving it an [inst] tag instead of [verse] and lyrics. Sadly you cannot customize it more than [intro], [inst], and [outro].
But I guess if you give it a sample with the sounds you want, you have a chance of getting them.
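To make the tag rules from this thread concrete, here's a minimal sketch of a pre-flight check you could run on a lyrics prompt before generating. The allowed-tag list is taken from the comments above (lowercase [intro], [outro], [inst], [verse], [chorus] only); this is an illustrative helper, not part of SongBloom itself:

```python
import re

# Tags reportedly accepted by SongBloom (per this thread's discussion);
# uppercase variants or other tags reportedly give much worse results.
ALLOWED_TAGS = {"[intro]", "[outro]", "[inst]", "[verse]", "[chorus]"}

def check_lyrics(lyrics: str) -> list[str]:
    """Return any bracketed section tags not in the allowed lowercase set."""
    tags = re.findall(r"\[[^\]]+\]", lyrics)
    return [t for t in tags if t not in ALLOWED_TAGS]

lyrics = "[intro]\n[verse]\nNeon rain on empty streets\n[chorus]\nWe run, we run\n[outro]"
print(check_lyrics(lyrics))            # [] -- all tags valid
print(check_lyrics("[Verse]\nhello"))  # ['[Verse]'] -- uppercase tag flagged
```

For a pure instrumental, the same check would pass a prompt that is just `[inst]` with no lyric lines.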
caetydid@reddit
one could spend hours playing with that
gtderEvan@reddit
That’s what she said.
NoLeading4922@reddit
How does this compare to ace-step?
That-Thanks3889@reddit
but going to keep getting better
That-Thanks3889@reddit
amazing
Green-Ad-3964@reddit
is an input audio always needed?
ihaag@reddit
ACE-STEP is still the closest open source we have to Suno or Riffusion
Green-Ad-3964@reddit
is there a workflow for this?
StyMaar@reddit
I've just listened to the samples. The sound is atrocious; how many denoising steps are missing here?!
cr0wburn@reddit
Can this also do text-to-song without an MP3 import? Or is it just song "cloning"?
Ulterior-Motive_@reddit
Any spaces/other online demos?
AppearanceHeavy6724@reddit
I did not expect music to be solved first by GenAI.
martinerous@reddit
The English is quite nice. Of course, it totally screws up Latvian, so I got some entertainment out of torturing it and laughing :)
It has a tendency to start with an exact clone of the sample song and then gradually deviate from it, often reducing the number of instruments. Drums and voice are enough, it decided :D
ffgg333@reddit
Can you train LoRAs on it? How much VRAM does training take?
Freonr2@reddit
Training of any model you can already download and run inference on isn't really a huge challenge in itself, so I don't see why not.
Finding good guidance on settings, data, etc. and trying to appease everyone with an 8GB GPU is the larger challenge.
seoulsrvr@reddit
Anyone have an idea of how it compares to Meta's MusicGen/AudioCraft setup?
nakabra@reddit
Wait, isn't SongBloom like... several months old? I've had it installed on my machine for a long time. Don't really use it, though. Getting good music from those models is like hitting the jackpot on a slot machine.
Different_Fix_2217@reddit (OP)
The DPO one just came out.
Sea-Tangerine7425@reddit
Can anyone tell me if this includes the encoder/decoder as a discrete component? I'm not interested in the actual backbone, as I have spent years developing my own pretraining and data pipeline for that very task, but the current state of open-source encoder/decoder models leaves a lot to be desired, and it would be nice to plug something better into my current setup.
s101c@reddit
The FLAC links don't work for me.
Languages_Learner@reddit
I wish it could be adapted to gguf format...
ddrd900@reddit
How much VRAM does it need to run?
Dany0@reddit
With the default config (250 seconds), 10 GB-ish, it seems.
BuildAQuad@reddit
Looks like somewhere around a minimum of 10 GB after a quick look. But I don't know for sure.
ddrd900@reddit
I am trying with 8 GB with no luck, but I believe it's very close. 10 GB makes sense, and I am pretty sure 8 GB is feasible with some optimization (or with an fp8 quant).
BuildAQuad@reddit
Yeah, I'd assume the model is 16-bit? Didn't check.
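A rough back-of-envelope consistent with the numbers in this thread, assuming the ~2B parameter count from -Ellary-'s comment and 16-bit (fp16/bf16) weights; the overhead note is an assumption, not a measurement:

```python
# Hypothetical VRAM estimate for the weights alone; the 2B figure is
# from this thread and the 16-bit dtype is assumed, not verified.
params = 2e9            # ~2B parameters
bytes_per_param = 2     # fp16 / bf16

weights_gib = params * bytes_per_param / 1024**3
print(f"weights alone: {weights_gib:.1f} GiB")  # ~3.7 GiB
```

Activations, attention caches, and the audio decoder then have to fit on top of that, which would line up with the ~10-12 GB usage people are reporting here; an fp8 quant would roughly halve the weight footprint.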
opi098514@reddit
How much you got?
More than that.
akefay@reddit
Someone in the ComfyUI sub said it works on their 16GB, and uses under 12GB (for the songs they've generated at least).
Ok-Adhesiveness-4141@reddit
Yes.
ShengrenR@reddit
That third example: Norah Jones? I'd put money on it.
sleepy_roger@reddit
I'm a simple man, when I see audio models drop I download them immediately before they get "Microsoft'd"
Different_Fix_2217@reddit (OP)
This was from feeding it the start of Metallica's "Fade to Black" and some Claude-generated lyrics:
https://files.catbox.moe/sopv2f.flac
Aaaaaaaaaeeeee@reddit
Having not caught up on new music models (diffusion/LLM/other), do you know if there's any new feature that's impossible with YuE's EXL2? I used this one before: https://github.com/alisson-anjos/YuE-exllamav2-UI
For example, remixing songs?