what are people's experiences with static verifier tools?
Posted by No_Technician7058@reddit | ExperiencedDevs | 6 comments
has anyone tried static verifier tools before? i happened upon a static verifier for rust called prusti & it seemed like it could fit as a nice in-between of linters & unit tests. i tried it out, really liked it, and was super impressed with how much i could cover with static verification alone.
unfortunately, it seems like it never made it out of the prototype stage: validators for the std-lib have been sitting on a development branch for 2+ years and there isn't a ton of activity on the project.
this got me interested in static verifiers in general. for example, Java has one called the KeY project, and Python has one called Nagini, which, like prusti, is built on the Viper project.
anyone used anything like this before? just curious about experiences with static verifiers and whether people have seen them used effectively by their teams. i have used linters and things like sonarqube, but those always left me feeling like i was doing more work cleaning up rule violations than fixing real issues. the verifier, on the other hand, felt like it was actually pointing me to real problems.
note: i am not affiliated with any of these tools.
casualfinderbot@reddit
I don’t understand how this is any different from linting
No_Technician7058@reddit (OP)
linting doesn't verify the logical correctness of the code, whereas these kinds of tools can statically determine whether any panic! or unreachable! macros are actually reachable.
they also allow specifying pre- and post-conditions for functions. so i might have a function called "pop" that can only be called if the list has at least one item.
a static verifier could then tell me if i am ever calling "pop" in a location where the list could be empty.
so while a linter applies rules, a static verifier can prove that the preconditions of a function are met everywhere it is called. a rough sketch of what that looks like is below.
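here is a minimal sketch using prusti-style annotations. the Stack type is made up for illustration; whether prusti can see through Vec's internals out of the box depends on those std-lib specs i mentioned:

```rust
use prusti_contracts::*;

pub struct Stack {
    items: Vec<i32>,
}

impl Stack {
    // #[pure] lets the verifier use this method inside contracts
    #[pure]
    pub fn len(&self) -> usize {
        self.items.len()
    }

    // postcondition: push grows the stack by exactly one item
    #[ensures(self.len() == old(self.len()) + 1)]
    pub fn push(&mut self, value: i32) {
        self.items.push(value);
    }

    // precondition: callers must be able to prove the stack is non-empty
    #[requires(self.len() > 0)]
    // postcondition: pop shrinks the stack by exactly one item
    #[ensures(self.len() == old(self.len()) - 1)]
    pub fn pop(&mut self) -> i32 {
        self.items.pop().unwrap()
    }
}
```

the verifier then checks, at every call site, that self.len() > 0 is provable from what it knows about the program at that point, without running anything.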
it can also verify that the implementation itself meets its stated contract. so i can say, for example:
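something like this, continuing the made-up Stack from above:

```rust
impl Stack {
    // postcondition: a freshly constructed stack is guaranteed empty
    #[ensures(result.len() == 0)]
    pub fn new() -> Self {
        Stack { items: Vec::new() }
    }
}
```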
and then if i were to change the implementation of new to insert a default item initially, the static verifier would tell me i've violated my "ensures" clause, and i could go figure out what the bug is.
SweetStrawberry4U@reddit
This is exactly what unit-test-based code coverage achieves, but only if unit tests are written "strategically" to cover all the necessary scenarios (don't invoke pop when the list is empty, that sort of thing). Even then, code coverage doesn't guarantee spec conformity: you could achieve 100% code coverage and still miss a few test-case scenarios.
Ideally, i'd have acceptance criteria for every tech implementation and a unit test for each such acceptance criterion. That way you cover "all possible run-time scenarios" while ensuring the code is essentially bug-proof at run-time. But that's a whole lot of documentation for the acceptance criteria, and a whole lot of unit-test scenarios, which eventually poses the question of ROI. Writing that many test-case scenarios also multiplies time-to-production for even the most trivial code changes. One such "strategic" test is sketched below.
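A sketch of one such "strategic" unit test, reusing the hypothetical Stack from above (plain Rust, no verifier; pop panics on an empty stack via unwrap):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // one of the N scenario tests: popping an empty stack must panic
    #[test]
    #[should_panic]
    fn pop_on_empty_panics() {
        let mut s = Stack::new();
        s.pop();
    }
}
```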
No_Technician7058@reddit (OP)
i guess the point is that with unit tests you need N tests checking pop is used properly everywhere it's called, and you have to actually run them. with a static verifier, the IDE tells me as i write the code whether the precondition to call the method has been met. i don't need to worry about making this mistake anymore: if i'm calling pop and the IDE is cool with it, i'm all good.
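concretely, with the made-up Stack from earlier, the call-site checking looks something like this (illustrative; the exact diagnostics vary):

```rust
fn main() {
    let mut s = Stack::new();
    // s.pop();          // verifier error: precondition self.len() > 0 might not hold
    s.push(1);
    let _x = s.pop();    // verified ok: push's postcondition established len() == 1
}
```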
mattcrwi@reddit
They're totally useless and a waste of time. Just have a shared understanding to minimize linter warnings and that's all you need.
No_Technician7058@reddit (OP)
what makes you say this? it seemed really useful when i was using it.