Cayne's QA Blog

“Someday, someone may experience these bizarre events. Hopefully, they will find my notes useful.” – Harry Mason

Blog Roll | About | Table Of Contents

Is It Trustworthy? Pt.1

Before we get started, time for a retrospective!

Second blog post word count: ~300
Last blog post word count: ~1500

That quickly ballooned! I feel like we’ve strayed somewhat…

New plan! We’re going back to keeping them short and sweet so I can pump them out at a more regular cadence. No wonder I was finding it so much harder! Right, back to my more recent QA experiences and experiments!


I Say ‘Unit Trust’ — But I’m Probably Stretching the Definition a Bit

Cast your mind back (or, if you were never there, go back (link)) to the Submissions to Sprint series, where we discussed “Why It Didn’t Work” and my lovely example of a potential scenario:

“Ooooh, the UI looks a bit risky. It should be at Alpha, but it has eight open Showstopper bugs (purple label)… maybe the dev needs more support?”

That is what we’re going to address in this series: a way of calculating an application’s “health” by analysing various bits of data we collect during development and testing. At this point in my career, I had read a handful of books about testing, but most were quickly dated or not specifically about game development. All of them, though, covered Test Monitoring and Control as well as “Unit Trust”, so I tried to come up with a simple way of visualising application health for the projects I was working on, repurposing the knowledge in those books for my needs.

I would like to introduce the Untrusted, Credible, Reliable and Trusted (UCRT) ranking system! It is applied to everything in a project: features, systems, Golden Paths, even DevOps. As it covers everything in a project, and they’re all “concepts” in their own right, these are what I am referring to when I say “Unit Trust”.

Have I stretched the definition? Absolutely. But the core ideas of Unit Trust helped shape this system — and it works, so I’m sticking with it.

So What’s The Idea?

Simple really: each unit has a trust level which is visible on a monitoring dashboard for the project. Like the OFT Trello, it’s informative at a glance, but unlike Trello, it’s rooted in real, quantifiable data!

Data like:

  • How often has the unit been tested?
  • How often are bugs found in that unit?
  • How severe are the bugs being found?
  • How many bugs are still open in the unit?
  • How quickly are bugs being fixed?
  • How deep is the coverage of the unit?
  • What stage of development is the unit at?
  • How often is it not testable?

Juicy! Raw and juicy data!
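To make the list above concrete, here’s a minimal sketch in Python of what a per-unit record of that data might look like. The field names and types are entirely my own invention for illustration, not the actual schema behind the dashboard:

```python
from dataclasses import dataclass

# Hypothetical per-unit record covering the data points listed above.
# Names and types are assumptions made for this sketch only.
@dataclass
class UnitMetrics:
    name: str                # the unit (feature, system, Golden Path, ...)
    test_sessions: int       # how often the unit has been tested
    bugs_found: int          # how often bugs are found in it
    max_severity: int        # worst severity found, e.g. 1 = Minor .. 4 = Showstopper
    open_bugs: int           # bugs still open in the unit
    avg_fix_days: float      # how quickly bugs are being fixed
    coverage: float          # depth of coverage, 0.0 to 1.0
    dev_stage: str           # e.g. "Pre-Alpha", "Alpha", "Beta"
    blocked_sessions: int    # how often it was not testable
```

The risky UI from the earlier quote might look like `UnitMetrics("UI", 4, 9, 4, 8, 3.5, 0.6, "Alpha", 1)` — eight open bugs at Showstopper severity, despite being at Alpha.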


What To Do With All The Data

The ins and outs of exactly what I do with that data are a trade secret, but it’s all about giving certain events “scores” and others a “factor”. Do some fancy maths and you get yourself a score that maps onto a scale, which outputs a rank of either Untrusted, Credible, Reliable or Trusted. All that testing boiled down to one word, so that when the project is locked down before a milestone but a unit isn’t quite up to the deliverable, a Producer can check the trust level and make the call as to whether or not to lift the lock and finish the feature.

“It’s Trusted, go on, make the change.”
“It’s only Credible, leave it as is and let’s build note it.”
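Since the real formula is a trade secret, here’s an invented Python sketch of the general shape — events carry scores, some conditions apply a multiplying factor, and thresholds map the total onto the four ranks. Every number, name and threshold below is my own assumption, not the actual system:

```python
# Illustrative event scores and factors — invented for this sketch,
# not the blog author's real (secret) values.
EVENT_SCORES = {
    "pass": 2,               # a test session that found no new bugs
    "minor_bug": -1,
    "showstopper_bug": -5,
    "bug_fixed": 1,
}

FACTORS = {
    "deep_coverage": 1.25,   # well-covered units earn extra confidence
    "often_blocked": 0.5,    # frequently untestable units are dampened
}

def trust_rank(events, factors=()):
    """Sum event scores, apply multiplicative factors, map onto a UCRT rank."""
    score = sum(EVENT_SCORES[e] for e in events)
    for f in factors:
        score *= FACTORS[f]
    if score >= 20:
        return "Trusted"
    if score >= 10:
        return "Reliable"
    if score >= 0:
        return "Credible"
    return "Untrusted"
```

Under these made-up numbers, the UI from earlier with its eight Showstopper bugs, `trust_rank(["showstopper_bug"] * 8)`, would come out Untrusted — exactly the kind of at-a-glance call the Producer quotes above are making.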

And that’s just the start of it!

In the spirit of the blog, I’m going to end it here today, but I have the urge to keep typing!

Next week, I’ll expand a bit more on the scores, factors and what else the data can be used for.


Leave a comment