underlap,
@underlap@fosstodon.org avatar

Non-deterministic behaviour in a specification can be a headache for testing. This updated post explores the non-determinism in JSONPath (RFC 9535), describes how the Compliance Test Suite is being upgraded to deal with non-determinism, and shows how non-deterministic tests can be generated automatically. There's also an "explosive" challenge for Haskell programmers.

https://underlap.org/testing-non-determinism
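
To make the testing angle concrete, here's a minimal Haskell sketch (illustrative names only, not the actual Compliance Test Suite schema): with a non-deterministic query, a test can't compare against a single expected value, so each case carries every permitted outcome and the check becomes membership rather than equality.

    -- Minimal sketch, with made-up names rather than the real CTS format:
    -- a non-deterministic case lists every ordering the spec permits,
    -- and the check is membership rather than equality.
    data TestCase a = TestCase
      { selector        :: String   -- the JSONPath query under test
      , acceptedResults :: [[a]]    -- all node-list orderings RFC 9535 allows
      }

    passes :: Eq a => TestCase a -> [a] -> Bool
    passes tc actual = actual `elem` acceptedResults tc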

underlap,
@underlap@fosstodon.org avatar

Found a couple of trivial optimisations of my Haskell code which sped it up by a factor of over 20,000 (for the "explosive" example mentioned in the above post). That's sufficient for now. 😉

underlap,
@underlap@fosstodon.org avatar

The beauty of writing inefficient code that is obviously correct is that if it needs optimising, you can actually understand the optimisations.
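
As a throwaway illustration of that point (not code from the thread): an obviously correct but quadratic de-duplication, next to a faster version you can trust precisely because only the membership structure changes.

    import qualified Data.Set as Set

    -- Obviously correct, but quadratic: keep the first occurrence of each
    -- element, testing membership against everything kept so far.
    dedupSlow :: Eq a => [a] -> [a]
    dedupSlow = go []
      where
        go seen (x:xs)
          | x `elem` seen = go seen xs
          | otherwise     = x : go (x : seen) xs
        go _ [] = []

    -- The optimisation is easy to check against the slow version because
    -- only the membership test changes, not the shape of the recursion.
    dedupFast :: Ord a => [a] -> [a]
    dedupFast = go Set.empty
      where
        go seen (x:xs)
          | x `Set.member` seen = go seen xs
          | otherwise           = x : go (Set.insert x seen) xs
        go _ [] = []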

FenTiger,
@FenTiger@mastodon.social avatar

@underlap This is my approach too. First, make a mess. Then make a well-tested mess. Then tidy it up.

underlap,
@underlap@fosstodon.org avatar

@FenTiger Absolutely! We have identified three dimensions: testing, clarity, and inefficiency. A reasonable order for tricky problems is:

  1. Untested, unclear, inefficient code
  2. Tested, unclear, inefficient code
  3. Tested, clear, inefficient code

then, only if necessary:

  4. Tested, clear, efficient code.

(This is partly reminiscent of "fire, aim, ready" if you've come across that.)

FenTiger,
@FenTiger@mastodon.social avatar

@underlap "Fire, Aim, Ready" - I haven't heard that before, but I like it. I've certainly met a few people who swear by doing things in this order. ;)

(Oh, and I'd suggest skipping over step 1 and going straight for step 2, if that's possible, which it sometimes isn't.)

underlap,
@underlap@fosstodon.org avatar

@FenTiger Ok, to expand on "fire, aim, ready", the idea is to knock up some code in order to explore the problem space. This can (some would say must) then be discarded, and you can then probably start coding proper by writing tests.

One difficulty with skipping step 1 (as in strict TDD) is that it assumes initial understanding sufficient to write a test.

But I agree step 1 should usually be skipped when you're developing something similar to what you've seen before.

iwein,
@iwein@mas.to avatar

@FenTiger @underlap "obviously correct" vs "mess" seems like a contradiction. I love "well tested mess" as a concept!

underlap,
@underlap@fosstodon.org avatar

@iwein @FenTiger My preference is to write obviously correct code from the start. Once my code gets messy, I get bogged down. But sometimes you have to start further back, with a mess, and chuck that away as soon as possible.

iwein,
@iwein@mas.to avatar

@underlap @FenTiger so a process could be:

Mess > tested mess > obviously correct > optimal.

Makes a ton of sense to me 🙏

underlap,
@underlap@fosstodon.org avatar

Came across a fascinating block comment just above this code: https://hackage.haskell.org/package/base-4.19.1.0/docs/src/Data.OldList.html#permutations -- complete with a link to a Stack Overflow answer offering a better explanation!

(Context: a good friend suggested I optimise my Haskell code by algebraically pushing a filter through the implementation of permutations. When I saw the implementation, I found an alternative. 😱)
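
For anyone who hasn't met the idea: "pushing a filter through permutations" roughly means checking candidates while they're being built rather than after the fact. Here's a rough sketch, not the code from the post; prefixOk is a hypothetical predicate that must hold for every non-empty prefix of an acceptable permutation, which is what lets dead branches be pruned early.

    import Data.List (delete, permutations)

    -- Naive: generate all n! permutations, then filter.
    -- For n = 13 that is already over six billion candidates.
    naive :: ([a] -> Bool) -> [a] -> [[a]]
    naive p = filter p . permutations

    -- Sketch of pushing the filter inwards: build permutations one element
    -- at a time and abandon a partial permutation as soon as its prefix is
    -- ruled out. Matches `naive p` only when `p perm` holds exactly when
    -- `prefixOk` holds for every non-empty prefix of `perm`.
    pruned :: Eq a => ([a] -> Bool) -> [a] -> [[a]]
    pruned prefixOk = go []
      where
        go acc [] = [reverse acc]
        go acc xs =
          [ perm
          | x <- xs
          , let acc' = x : acc
          , prefixOk (reverse acc')      -- prune dead branches early
          , perm <- go acc' (delete x xs)
          ]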
