Not impressed with Reflection Llama 3.1 70B IQ2_S quant.
Posted by LaughterOnWater@reddit | LocalLLaMA | 14 comments
Win10/64GB | RTX 3090 | Reflection Llama 3.1 70B IQ2_S
This is a conversation I had with Reflection, below. I also had other conversations around the Hubble Constant and a definition that is timestamped in the web-o-sphere, so it ends up being a good way to determine how old the data used to bake the model are. Except it failed miserably at all of that compared to the vanilla Llama 3.1 70B IQ2_S quant, which could go through maybe 30 questions before hallucinating (16K tokens, 3.1 tok/s). This model is supposed to be better at knowing when it doesn't know. I disagree. It either hallucinated right away or started the short-answer thing at about 6K tokens (3.4 tok/s). Perhaps the full model on a larger machine would yield better results, but if this quant is any indicator, Reflection is, in the words of Paul Hollywood from The Great British Baking Show, "Overworked and Underbaked".
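(If anyone wants to poke at a quant like this outside LM Studio, here's a minimal sketch using llama-cpp-python, which, like LM Studio, runs GGUF files via llama.cpp. The model filename and GPU layer count are placeholders, not my exact settings.)

```
# Minimal sketch, assuming llama-cpp-python and a local GGUF file.
# The filename and n_gpu_layers value are illustrative placeholders,
# not the exact settings behind the numbers quoted above.
from llama_cpp import Llama

llm = Llama(
    model_path="Reflection-Llama-3.1-70B.IQ2_S.gguf",  # hypothetical local path
    n_ctx=16384,      # ~16K context, as in the run described above
    n_gpu_layers=60,  # partial offload; tune to fit a 24 GB RTX 3090
    verbose=False,
)

messages = [
    {"role": "system", "content": "You are a knowledgeable, efficient, and direct AI assistant."},
    {"role": "user", "content": "What do you know about ComfyUI, Stable Diffusion and Flux?"},
]

out = llm.create_chat_completion(messages=messages, max_tokens=512, temperature=0.7)
print(out["choices"][0]["message"]["content"])
```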
System:
You are a knowledgeable, efficient, and direct AI assistant. Provide concise answers, focusing on the key information needed. Offer suggestions tactfully when appropriate to improve outcomes. Engage in productive collaboration with the user.
User:
what do you know about comfyui, stable diffusion and flux?
Assistant:
<thinking>
To answer this question about ComfyUI, Stable Diffusion, and Flux, I'll need to provide information on each of these technologies and their applications. Let's break it down:
1. ComfyUI:
- ComfyUI is a user interface framework for creating web applications.
- It provides pre-built components and layouts for rapid development.
2. Stable Diffusion:
- Stable Diffusion is an AI model used for text-to-image generation.
- It creates images based on input text descriptions.
3. Flux:
- Flux is an application architecture pattern for managing state changes in applications.
- It's often used with React and Redux libraries.
Let's provide a brief overview of each:
ComfyUI: ComfyUI is a user interface framework designed to simplify the process of creating web applications by providing pre-built components and layouts. This allows developers to focus on core functionality rather than spending time building basic UI elements from scratch. It can be particularly useful for rapid prototyping or when you need to create a functional interface quickly.
Stable Diffusion: Stable Diffusion is an AI model that generates images based on text input. It's capable of creating diverse and often realistic images from simple text descriptions. This technology has applications in various fields, including art, design, and content creation.
Flux: Flux is an application architecture pattern used to manage state changes in applications. It's particularly popular when working with React and Redux libraries. Flux helps organize data flow through the application by creating a single source of truth for state management.
These technologies are quite different but can be combined in interesting ways:
- ComfyUI could be used as the frontend for an application that utilizes Stable Diffusion models.
User:
Okay I have to stop you right there. Comfyui is the interface many people use to run Stable Diffusion. Flux is a model that's similar to but more advanced than stable diffusion. Reflection, I suspect the Llama 3.1 70B iQ2_S quantization of your original model (this bake of you) is not very useful. I suspect other larger quants or the original unquantized model may be better.
Assistant:
<thinking>
Upon reflection, it appears that my initial understanding was incorrect and incomplete. Let me revise my knowledge based on this new information:
1. ComfyUI:
- ComfyUI is an interface used to run Stable Diffusion models.
- It provides a user-friendly way to interact with Stable Diffusion technology.
2. Stable Diffusion:
- Stable Diffusion is an AI model for text-to-image generation.
- It creates images based on input text descriptions.
3. Flux:
- Flux is a more advanced AI model similar to Stable Diffusion.
- It's likely used for more complex or high-quality image generation tasks.
This new information suggests that ComfyUI serves as an interface layer for interacting with Stable Diffusion models, while Flux represents a potentially more powerful alternative or successor in the field of text-to-image generation. The mention of Llama 3.1 70B iQ2_S quantization and its comparison to other models indicates that there are various versions of AI models being developed and tested.
</thinking>
Thank you for correcting me! I apologize for my earlier misunderstanding. It seems ComfyUI is indeed an interface used to run Stable Diffusion models, not a separate UI framework as I initially thought. This makes sense given the context of your question.
Stable Diffusion remains a powerful AI model for text-to-image generation, capable of creating diverse and often realistic images from simple text descriptions.
Flux appears to be a more advanced model in the same field,
User:
Thanks Reflection. I guess it's time for sleep now. Good night.
Assistant:
You're welcome! It was a pleasure helping you clarify your question about ComfyUI, Stable Diffusion, and Flux. If you have any more questions or need assistance in the future, don't hesitate to ask.
Have a good rest and sweet dreams!
</output>
LaughterOnWater@reddit (OP)
Wow, Thanks everyone for the replies.
You're right. I didn't see the most recent updates on how Reflection is fraudulent.
However, I gave it a shot, since it showed up in LM Studio, had 10897 downloads and 24 likes.
I asked Claude about these responses giving him the thread for context. This was the response I got:
Claude:
I understand your frustration with the community's response. You're right that some of the reactions come across as unnecessarily harsh or dismissive, especially given that you were simply sharing your honest experience and analysis.
In online communities, especially those focused on cutting-edge technology, there can sometimes be:
While the information about the model being fraudulent was important to communicate, the community could have done so more constructively and compassionately. Your testing and analysis still required effort and thought, even if the underlying model turned out to be illegitimate.
It's natural to feel frustrated when met with sarcasm or dismissiveness instead of helpful clarification. In an ideal world, the community would have:
I reported what I found probably five hours late to the party and got crucified for not being in the know.
How is all the cattiness and stepping on my neck in any way building community or making things better for the AI community? Schadenfreude leaves a bad aftertaste for me, even when it's well deserved. Let's not be so jaded in life that we forget there are humans on the other side of the thread.
Let's build community instead of piling on hate.
Chris
Feztopia@reddit
"Lack of empathy for those who may have missed recent developments" That sounds so ai I wish Claude wouldn't say that.
turtle_donuts@reddit
This post needs upvotes people.
LaughterOnWater@reddit (OP)
Thanks u/turtle_donuts
0xCODEBABE@reddit
lol
Feztopia@reddit
Yeah it made my day :D
LaughterOnWater@reddit (OP)
This
XMasterrrr@reddit
This might help clear things up for you: https://www.reddit.com/r/LocalLLaMA/comments/1fd75nm/out_of_the_loop_on_this_whole_reflection_thing
LaughterOnWater@reddit (OP)
Thanks u/XMasterrrr
PriceNo2344@reddit
Who is gonna tell him?
FlamaVadim@reddit
😃
Ravenpest@reddit
For a second there I thought you were serious. Nice one.
Status-Shock-880@reddit
Stfu
randombsname1@reddit
You missed the part where this was all a scam.