what are people's experiences with static verifier tools?
Posted by No_Technician7058@reddit | ExperiencedDevs | 23 comments
has anyone ever tried static verifier tools before? i happened upon a static verifier for rust called prusti, and it seemed like it could fit as a nice in-between of linters and unit tests. i tried it out, really liked it, and was super impressed with how much i could cover with just static validators.
unfortunately, it seems like it never made it out of the prototype stage, as validators for the std-lib have been sitting on a development branch for 2+ years and there isn't a ton of activity on the project.
this got me interested in static verifiers in general. for example, Java has one called The KeY Project, and Python has one called nagini, which, like prusti, is also based on the viper project.
anyone use anything like this before? just curious about experiences with static validators and whether people have seen them used effectively by their teams. i have used linters and things like sonarqube, but they always left me feeling like i was doing more work cleaning up rule violations than the tool was doing flagging real issues. the verifier, on the other hand, felt like it was actually pointing me to real problems while i was using it.
note: i am not affiliated with any of these tools.
casualfinderbot@reddit
I don’t understand how this is any different from linting
No_Technician7058@reddit (OP)
linting doesn't verify the logical correctness of the code, whereas these kinds of tools allow for statically determining whether any panic or unreachable macros are actually reachable.
they also allow for specifying pre- and postconditions for functions. so i might have a function called "pop" that can only be called if the list has at least one item.
with a static verifier, it could tell me if i am ever calling "pop" in a location where the list could be empty.
so while a linter applies rules, a static verifier can validate that the preconditions for a function are met everywhere the function is called.
it can also verify that the implementation itself meets its stated conditions. so i can say something like this
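(a rough sketch with prusti-style requires/ensures annotations; the Vec-backed list is just for illustration, and the exact spec syntax may not match the current prusti release)

    use prusti_contracts::*;

    pub struct List {
        items: Vec<i32>,
    }

    impl List {
        // marked pure so it can be used inside specifications
        #[pure]
        pub fn len(&self) -> usize {
            self.items.len()
        }

        // "new" promises to return an empty list
        #[ensures(result.len() == 0)]
        pub fn new() -> Self {
            List { items: Vec::new() }
        }

        // "pop" may only be called on a non-empty list,
        // and it shrinks the list by exactly one
        #[requires(self.len() > 0)]
        #[ensures(self.len() == old(self.len()) - 1)]
        pub fn pop(&mut self) -> i32 {
            self.items.pop().unwrap()
        }
    }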
and then if i were to change the implementation of new to insert a default item initially, the static verifier would tell me i've violated my "ensures" and i could try and figure out what the bug is.
Consistent-Art8132@reddit
That’s pretty cool! It reminds me of design by contract and property testing. Have you tried out QuickCheck for rust, or languages with design by contract built in?
Of course that’s not everything you’re describing, but if I’m understanding correctly, it’s close
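For a flavour of the property-testing side, a test with the quickcheck crate might look roughly like this (the property here is made up, just to show the shape):

    #[cfg(test)]
    mod tests {
        use quickcheck::quickcheck;

        // property: pushing a value and then popping gives it back
        // and leaves the length where it started
        fn push_pop_roundtrip(mut xs: Vec<i32>, x: i32) -> bool {
            let len_before = xs.len();
            xs.push(x);
            let popped = xs.pop();
            popped == Some(x) && xs.len() == len_before
        }

        #[test]
        fn prop_push_pop() {
            // quickcheck feeds in lots of random (xs, x) pairs and shrinks
            // any failing case down to a small counterexample
            quickcheck(push_pop_roundtrip as fn(Vec<i32>, i32) -> bool);
        }
    }

It samples random inputs rather than proving anything statically, but it checks the same kind of contract-ish properties.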
No_Technician7058@reddit (OP)
i don't know enough to say how design by contract differs from static verifiers; they may be the same class of tools.
i haven't seen QuickCheck but i will take a look! thanks!
Consistent-Art8132@reddit
I've come across another library you may be into. Kani is a model checker for rust. It attempts to prove that there's no input that could make your function fail, or it reports a case where it will fail.
It’ll catch overflows, panics, null pointer dereferences and so on
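A proof harness looks roughly like this (a sketch based on my understanding of kani's kani::any / #[kani::proof] API; the buggy function is made up for illustration):

    // a deliberately buggy function: overflows when a == u32::MAX
    fn increment(a: u32) -> u32 {
        a + 1
    }

    #[cfg(kani)]
    #[kani::proof]
    fn check_increment() {
        // kani::any() is a symbolic "every possible u32", so kani explores
        // all inputs rather than sampling them
        let a: u32 = kani::any();
        let b = increment(a);
        // kani reports the counterexample a == u32::MAX (overflow + failed assert)
        assert!(b > a);
    }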
No_Technician7058@reddit (OP)
thank you very much for sharing it with me! i found creusot recently but have never seen kani! appreciate it!
Consistent-Art8132@reddit
Design by contract verifies certain properties of a function or structure. For example, for a version of a pop function the following should hold: a precondition (the list is not empty when passed to the function), a postcondition (the list's length is one less than before the call), and an invariant on the list itself, common to all methods (the length is never negative).
Some languages have design by contract built in, give access to the "old" and "new" values, and will throw if a condition is violated. You can do the same in other languages, but it's not as ergonomic to put validation checks at the start and end of every method, as in the sketch below.
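By hand it ends up looking something like this (a rough Rust sketch, with debug_assert! standing in for real contract support; the List type is made up for illustration):

    struct List {
        items: Vec<i32>,
        len: usize, // cached length, kept in sync with items
    }

    impl List {
        fn new() -> Self {
            List { items: Vec::new(), len: 0 }
        }

        // invariant common to all methods: the cached length matches reality
        // (the "never negative" part comes for free with usize)
        fn check_invariant(&self) {
            debug_assert_eq!(self.len, self.items.len());
        }

        fn pop(&mut self) -> i32 {
            // precondition: the list is not empty
            self.check_invariant();
            debug_assert!(!self.items.is_empty(), "pop called on an empty list");
            let old_len = self.len; // manually capture the "old" value

            let value = self.items.pop().unwrap();
            self.len -= 1;

            // postcondition: length is exactly one less than before
            debug_assert_eq!(self.len, old_len - 1);
            self.check_invariant();
            value
        }
    }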
I think it’s a super cool concept to limit what states you’d like to handle up front. I’d like to get into it more at some point!
SweetStrawberry4U@reddit
This is roughly what unit-test-based code coverage achieves - that is, only if the unit tests are written "strategically" to cover all the necessary scenarios, the "don't invoke pop when the list is empty" sort. But even then, code coverage doesn't necessarily guarantee spec conformity: you could achieve 100% code coverage and still miss a few test-case scenarios.
Ideally, I'd have an acceptance criterion for every tech implementation, and a unit test for each such acceptance criterion. That way you cover "all possible run-time scenarios" while ensuring the code is essentially bug-proof at run-time. But that's a whole lot of documentation for the acceptance criteria, and a whole lot of unit-test scenarios, which eventually poses the question of ROI. Writing that many unit-test scenarios also multiplies the time-to-production for even the tiniest of trivial code changes.
No_Technician7058@reddit (OP)
i guess the point is that with unit tests you need N tests checking pop is used properly everywhere it's called, and you have to run those tests. with a static verifier, my IDE simply tells me as i write the code whether the precondition for calling the method has been met. i don't need to worry about making this mistake anymore: if i'm calling pop and the IDE is cool with it, i'm all good.
behusbwj@reddit
While static analyzers are cool, be careful about using them for anything more than basic checks. They should not be a substitute for tests.
I think in this case, since you’re the one selectively applying rules, that’s a plus in my eyes. Many of the negative opinions you’ll get are from people who have to run these things over the whole codebase and turn it off at certain lines or components because of false positives.
Also consider that many of us work in codebases orders of magnitude larger than what you’d regularly find in open source. These validations are not cheap and will add up over time. Efficient static analysis is an active research area in computer science.
MrEloi@reddit
In my (non-web) career I always used as many static - and dynamic - tools as I could find.
I would ensure that every warning etc was fixed - otherwise why use the tools at all?
I would also use dynamic test tools and instrumented run-time libraries to detect errors at execution time.
I would also use Monte-Carlo test harnesses to exercise the code for hours, sometimes days.
I would use code-coverage checkers - and also profilers to find (and then rewrite) the 'busy' parts of the code, especially any pathologically CPU-greedy sections.
Only then would I allow the code to run on the target platform - which would usually simply crash with a less-than-helpful memory dump if any errors were encountered.
All this pre-testing allowed me to often get a 100% successful first-time run on the production platform.
Note: Working with a safety-critical language such as Ada, or the MISRA C subset of C, can help prevent errors up front.
No_Technician7058@reddit (OP)
i have to do some reading on ada as i would love to learn more about ways of improving language safety and preventing programming errors
CalmTheMcFarm@reddit
When I worked in a particular OS's kernel codebase I added a static analysis checker developed by our company's R&D org to the standard build pipeline. That meant we had almost instantaneous results available, checked against an immense array of known issues plus issues derived from rules which were provably correct. Over time we solved quite a few niggly, hard-to-reproduce bugs because this tool was able to pinpoint where the problem became visible and trace back through all the layers to where it occurred and why.
Since I moved out of that space and to a different company, we're making use of SonarQube, Veracode and CAST. All of them are okay, but the way we've got them configured you have to access reports via a webpage rather than right there in the build log - I miss that a lot. The niggle I have with CAST is that I don't like its messages - quite a few of the ones I've seen so far are poorly written and could easily be improved.
In general I think static analysis tools are very useful. Some tweaking might be required for your application's needs, but it's well worth the effort.
crabperson@reddit
I just wish we expected languages to self-enforce these basic safety checks. I get that most people don't want to deal with Haskell-style types like this:
pop :: [a] -> Maybe a
But it seems like the compiler could at least throw a warning if you write a function that bombs on an empty list.
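To be fair, rust's own Vec::pop does something in this spirit by returning an Option, so the caller can't ignore the empty case (a small illustrative snippet, not tied to any of the tools above):

    fn main() {
        let mut xs: Vec<i32> = vec![];

        // Vec::pop returns Option<i32>, so the compiler makes the caller
        // spell out what happens when the list is empty
        match xs.pop() {
            Some(x) => println!("popped {x}"),
            None => println!("list was empty"),
        }

        // by contrast, xs.pop().unwrap() compiles fine and just panics at
        // runtime on an empty list - the kind of thing a verifier would flag
    }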
No_Technician7058@reddit (OP)
yeah i agree.
HTTP404URLNotFound@reddit
We tried using static verifier tools once on our C++ codebase, but the maintenance burden of things like function parameter annotations was very high. We specifically tried SAL annotations with MSVC's code analysis tool. It also generated a bunch of false positives that were burdensome to sort through, so we gave up.
No_Technician7058@reddit (OP)
thanks, i figured that must be a typical experience given how unpopular these tools seem to be in broader circles.
Orca-@reddit
My experience in the C/C++ world has been mixed.
They need careful tuning and appropriate examination of the errors reported because I’ve found that frequently the reported problems…aren’t. And there is a management tendency to want to turn on everything.
But they have uncovered a few possible divide by zero and array out of bounds reads or writes for me.
They're expensive, and I don't know that they're worth the expense and complexity they add to the build, but they aren't valueless.
YMMV depending on the tool and the language in question and the exact setup configuration.
No_Technician7058@reddit (OP)
i think this is a static analyzer that runs rules against the code rather than a static verifier; my understanding is that static verifiers shouldn't ever flag false positives, whereas an analyzer might.
for example, flagging cyclomatic complexity would be done by an analyzer, whereas a verifier is only concerned with whether it's possible to hit unreachable code or a panic, or whether a function is being called without its preconditions being met.
apologies if i'm incorrect; that is just my understanding.
DeltaJesus@reddit
Mixed, tbh. Having one as part of our GitHub Actions has made me a bit more defensive in some of the stuff I write than I otherwise would have been, but I've also definitely lost a decent chunk of time refactoring perfectly serviceable code just to make it happy, and they can be a real pain when dealing with some of the shitty old data we sometimes have to handle. We also had to dial it way down for our unit tests after a while, but I think that's expected really.
That's assuming a static verifier is at least similar to a static analyser; I couldn't really find anything concrete about the differences, if there are any, from a quick Google.
No_Technician7058@reddit (OP)
a static verifier is a type of static analyzer. a verifier verifies the logic of the code.
for example, a verifier allows for setting preconditions on functions and then checking whether those preconditions are met wherever the function is called. the example i gave before was disallowing code which could potentially be calling "pop" when the list is empty.
static analyzers, on the other hand, are typically thought of as running rules against your code and looking for matching violations, which may or may not be helpful.
sometimes they do the same thing; sometimes an analyzer can tell you you are potentially popping from an empty list. but (theoretically) a static verifier should never produce false positives, unlike rule-driven systems.
this is mostly my understanding of the difference. see this for more details: https://www.pm.inf.ethz.ch/research/viper.html
mattcrwi@reddit
They're totally useless and a waste of time. Just have an understanding to minimize linter warnings and that's all you need.
No_Technician7058@reddit (OP)
what makes you say this? it seemed really useful when i was using it.