Qt's latest AI push is letting AI agents deal with performance profiling
Posted by Fcking_Chuck@reddit | linux | View on Reddit | 20 comments
AGuyNamedMy@reddit
This sub should ban photonic posts tbh, it’s all just clickbait garbage
Kobymaru376@reddit
It was banned for a long time, but it was finally unbanned, because why would you actually ban it?
How is it clickbait garbage? They post articles about Linux, and not everyone wants to comb through dozens of mailing lists every day to get information
einar77@reddit
So you prefer posts in the form of waves?
Catenane@reddit
Phoronix is fine and it's a good way to stay up to date with random linux shit that nobody else writes about. It's not the most detailed, because that's not the point of it... and it usually links directly to further discussion when relevant. It's absolute gold: I can keep up with everything without spending too much time, and read into topics I care more about by following the links. I regularly learn about new features/changes/whatever that I use at both work and home from phoronix.
This sub IIRC has gone back and forth on banning phoronix, and it's always been a stupid decision to ban it IMO. I regularly keep up with phoronix and it's great for its intended purpose: short form news updates on open source shit that nobody else cares to write about. Sure, Michael sometimes jazzes up the headlines a bit (especially when there's no juicy filesystem drama lol), but it's still nowhere near as bad as any mainstream news organization...
Comments being a dumpster fire is also a plus, because it grounds me--a good reminder that even though I'm a linux weirdo, I haven't gotten to the "wayland is coming to steal your penis" phase.
Traditional_Hat3506@reddit
It used to be until somewhat recently
Epsilon_void@reddit
If you're going to call for something to be banned, at least name the thing correctly.
CatalonianBookseller@reddit
Absolutely! It's not really trash, just a bit clickbaity
Resident-Version-116@reddit
It was probably a mobile autocorrect that wasn't noticed while typing, happens all the ducking time.
FlukyS@reddit
Such a weird thing to downvote. It isn't as dramatic as it sounds: skills are basically a way of telling an LLM how to do something. So in this case you'd write some Python or a bash script describing how to study a specific file, and the model will run it if you ask and parse the result. It isn't doing the work of profiling or writing code or anything; it's just taking one input and interpreting the results. A good example of how this could work: say you get a weekly report in Excel. You write a skill that parses the Excel in a specific way, and then you can use the LLM to generate a report email or Slack message or whatever, automating the task without the model guessing how to extract the data.
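A minimal sketch of what such a "skill" script could look like, assuming a hypothetical weekly report with invented columns (`team`, `tickets_closed`); the idea is that the parsing is deterministic code the LLM invokes, not something the model improvises:

```python
# Hypothetical "skill": a deterministic parser the LLM can run instead of
# guessing how to extract data from the report. The report format here
# (CSV with "team" and "tickets_closed" columns) is invented for illustration.
import csv
import io


def summarize_report(csv_text: str) -> dict:
    """Return total tickets closed per team from the weekly report."""
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        team = row["team"]
        totals[team] = totals.get(team, 0) + int(row["tickets_closed"])
    return totals


if __name__ == "__main__":
    sample = "team,tickets_closed\nkernel,4\ndesktop,7\nkernel,2\n"
    # The model would call this skill, get structured data back, and only
    # then draft the email/Slack summary from the result.
    print(summarize_report(sample))  # {'kernel': 6, 'desktop': 7}
```

The point is the division of labor: the script does the extraction exactly the same way every week, and the LLM only does the part it's good at, turning the structured result into prose.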
omniuni@reddit
The company I work for has AI plugged into the commit pipeline. It gives a summary of the changes, a technical walkthrough, produces a diagram, and highlights which complex areas need further or detailed review. I used to write a paragraph; this is much more comprehensive, and I can review the output and correct any mistakes in the time I used to spend just writing a summary. It's not taking any job, just making mine and my coworkers' jobs easier.
PerkyTomatoes@reddit
What about hallucinations? I have seen a few cases where the AI hallucinates results or removes very important context.
My concern is that when the AI does a "good enough" job, people don't actually review the output, which happens too often for my taste.
markusro@reddit
Of course, that is the drawback if you rely too much on AI. But there is a big but: quite often a human is too lazy to do the job in the first place, so the error would be there anyway.
IMHO the bigger problem is that, over time, you can lose the overview as well as the in-depth knowledge of a project. You get short-term gains (the aforementioned commit pipeline sounds really helpful), but long term you can lose knowledge because more and more of the details become hidden.
Jacksaur@reddit
To the degree that AI tools can just completely fabricate stuff from nothing: Not really.
JackSpyder@reddit
Humans have been doing that for millennia. There are entire industries dedicated to it.
omniuni@reddit
That's why analysis and code generation are very different things. "Check this area for potential problems" is just a pointer.
DesiOtaku@reddit
It's funny because I never got good QML code out of any LLM, even with Qt's own LLM. Unless you have a very small app which has very little dynamic page loading, it is extremely difficult for any automated (LLM or otherwise) system to figure out what is wrong with your code outside of just running the profiler.
For me at least, I need a good visualization of what is causing the long load times, and 99.9% of my performance issues are because of loading a component that should be using a Loader or Component class. I normally don't need an LLM to tell me that I shouldn't have a Dialog class inside each instance of a delegate in a ListView.
einar77@reddit
Older models were outright terrible, often creating a lot of code working around what turned out (once I tracked it down) to be a one-line issue. Earlier Codex versions also polluted the QML files with a lot of unnecessary JS.
More recent ones are better, but you basically need to keep your hands on the wheel to stop them from going off track.
Square_Attention8461@reddit
Glad Qt has more than three months of commits, otherwise we'd have to remove this.
Other_Fly_4408@reddit
https://reddit.com/r/accelerate/comments/1sng1xd/ai_sessions_at_work/
Square_Attention8461@reddit
That is a post I made, yes? Was the above comment not obviously sarcastic enough?