How do Hedgehog and Hypothesis differ in their shrinking strategies?
The article uses the words "integrated" vs. "internal" shrinking.
> the raison d’être of internal shrinking: it doesn’t matter that we cannot shrink the two generators independently, because we are not shrinking generators! Instead, we just shrink the samples that feed into those generators.
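To make the quoted idea concrete, here is a minimal sketch in plain Python (not falsify's actual API; all names here are hypothetical): the generators consume a stream of raw samples, and the shrinker lowers the raw samples toward zero and reruns the generator, so dependent draws (a length, then that many elements) never have to be shrunk independently.

```python
def gen_int(samples, lo, hi):
    # Draw an integer in [lo, hi] from the next raw sample.
    return lo + samples.pop(0) % (hi - lo + 1)

def gen_list(samples):
    # Two dependent draws: a length, then that many elements.
    n = gen_int(samples, 0, 5)
    return [gen_int(samples, 0, 100) for _ in range(n)]

def run(gen, samples):
    return gen(list(samples))

def fails(failing, samples):
    try:
        return failing(run(gen_list, samples))
    except IndexError:  # stream too short for this run; treat as passing
        return False

def shrink(samples, failing):
    # Greedily lower each raw sample toward 0 while the property still
    # fails, rerunning the *generator* on every candidate stream.
    samples = list(samples)
    changed = True
    while changed:
        changed = False
        for i, s in enumerate(samples):
            for c in (0, s // 2, s - 1):
                if 0 <= c < s and fails(failing, samples[:i] + [c] + samples[i + 1:]):
                    samples[i] = c
                    changed = True
                    break
    return samples
```

For example, with the (assumed) property "lists of length >= 2 fail", shrinking the stream `[3, 50, 60, 70, 0, 0]` and rerunning the generator yields `[0, 0]`: the length and the elements shrank together, even though they came from separate draws.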
Besides that, it seems like falsify has many of the same features, such as choice of ranges and distributions.
Really? Your examples seem to show the opposite. I am left immediately thinking, "hm, is it failing on a '!', some sort of shell issue? Or is it truncating the string on '#', maybe? Or wait, there's a space in the third one, that looks pretty dangerous, as well as noticeably longer, so there could be a length issue..." As opposed to the shrunk version, where I immediately think, "uh oh: one of them is not handling an empty input correctly." It's also much easier to read, copy-paste, and type.
If I understand correctly, they approximate the language of inputs of a function in order to discover minimal inputs (minimal in some sense, like "shortest description length") that violate relations between the inputs and outputs of the function under scrutiny.
I care about the edge where "this value fails, but one value over succeeds".
I wish shrinking were fast enough to tell me if there are multiple edges between those values.
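A brute-force sketch of what "multiple edges" would mean, in plain Python (this is not something current shrinkers report; the property below is made up for illustration): scan a range for every boundary where the property fails at `x` but holds at `x + 1`.

```python
def edges(prop, lo, hi):
    # Return each x in [lo, hi) where prop fails at x but holds at x + 1,
    # i.e. each "this value fails, one value over succeeds" boundary.
    return [x for x in range(lo, hi) if not prop(x) and prop(x + 1)]

# Hypothetical property failing on two disjoint ranges: [10, 20] and [40, 45].
prop = lambda x: not (10 <= x <= 20 or 40 <= x <= 45)
```

Here `edges(prop, 0, 100)` returns `[20, 45]`: two distinct failure boundaries, which a single shrunk counterexample would never reveal. A real shrinker could not afford this exhaustive scan, which is presumably why this information is not available today.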
Suppose I have a function which takes four string parameters, and I have a bug which means it crashes if the third is empty.
I'd rather see this in the failure report:
("ldiuhuh!skdfh", "nd#lkgjdflkgdfg", "", "dc9ofugdl ifugidlugfoidufog")
than this:
("", "", "", "")