iOS 13 Bugs Cause Apple to Overhaul Software Testing

Originally published at: https://tidbits.com/2019/11/25/ios-13-bugs-cause-apple-to-overhaul-software-testing/

In the wake of complaints about iOS 13, Apple is revising its internal testing procedures, but the company’s proposed changes don’t address all the concerns laid out by former Apple engineer David Shayer.

I have no evidence, but I will tell you what I expect is the case.

Apple developers, like so many of us, do not know what a successful test is: They think software testing is running tests to see if the software works as intended. If it works, they think that is a successful test.

I don’t want to be ruler of all software, but if the role were forced on me, my first executive order would be that no person could work in software development unless a) they had studied The Art of Software Testing and b) they understood and agreed with what Glenford J. Myers wrote in that book. He wrote it 40 (count 'em, forty) years ago.

Software testing is exercising a program with the intent of causing it to malfunction.

“A successful test case is one that detects an as-yet undiscovered error. [Myers]”
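
To make the distinction concrete, here is a minimal sketch in Swift using XCTest. The parseDate function and its behavior are invented purely for illustration; they are not from any real codebase.

```swift
import XCTest

// Hypothetical function under test, invented for this sketch:
// it is supposed to parse "YYYY-MM-DD" strings into components.
func parseDate(_ s: String) -> (year: Int, month: Int, day: Int)? {
    let parts = s.split(separator: "-").compactMap { Int($0) }
    guard parts.count == 3 else { return nil }
    return (parts[0], parts[1], parts[2])
}

final class DateParsingTests: XCTestCase {
    // The "does it work as intended?" mindset: feed in exactly the input
    // the author expects. If this passes, nothing new has been learned.
    func testParsesWellFormedDate() {
        XCTAssertEqual(parseDate("2019-11-25")?.month, 11)
    }

    // The Myers mindset: exercise the code with the intent of making it
    // malfunction. Each assertion is hunting for an undiscovered error.
    func testRejectsMalformedInput() {
        XCTAssertNil(parseDate(""))           // empty string
        XCTAssertNil(parseDate("2019-11"))    // too few fields
        XCTAssertNil(parseDate("2019-13-40")) // impossible month and day
        XCTAssertNil(parseDate("not-a-date")) // non-numeric fields
    }
}
```

Run against this deliberately naive parser, the impossible-date assertion fails, because the parser never checks ranges. By Myers’ definition, that failing case is the successful test: it just detected an as-yet undiscovered error.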

I have not read the 3rd Edition, which Wiley is now selling. I hope they have not watered down the message. It should be all in Chapter 2 - The Psychology and Economics of Software Testing (fourteen pages). It’s in the book!

The rest of the book is details based on the facts of Chapter 2. The 3rd Edition was published 8 years ago, so the remaining chapters will not be right up to date. That does not matter. Unless it has been watered down, the message is in Chapter 2.


I guess the software developers are hanging out for quantum computer apps to thoroughly test conventional computer software, since quantum computers are supposed to execute every possibility!
Until then they seem to be complacent.
🙂
I foreshadowed this in 1999:
http://users.tpg.com.au/users/aoaug/qtm_comp.html

Software should be tested by people who have no idea how it is supposed to work.

Although I believe I understand the advantage of doing so, it should not exclude testing by people who know exactly how it is supposed to work. Otherwise it could be released with bugs that prevent its intended use.

Software developers generally test their work as they go. But they know how it is supposed to work and how they intend it to be used; so they are naturally testing the intended use, since that is what they are in the process of creating.

People who don’t know anything about the software are likely to behave more randomly in terms of what they click on, what data they enter, etc. So they are more likely to cause the program to go through unintended (and likely untested) sequences of code, a good way to find bugs.
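
A crude way to automate a little of that randomness is a fuzz-style loop: generate inputs nobody planned for and check an invariant that should hold for any input at all. The sketch below is only illustrative; the parsePair function is hypothetical, and real fuzzing tools (and real naive users) are far more inventive.

```swift
// Hypothetical function under test, invented for this sketch: it should
// return nil for anything that is not a well-formed "key=value" pair.
func parsePair(_ s: String) -> (key: String, value: String)? {
    let parts = s.split(separator: "=", maxSplits: 1).map(String.init)
    guard parts.count == 2, !parts[0].isEmpty else { return nil }
    return (parts[0], parts[1])
}

// Crude fuzz loop: throw randomly generated strings at the parser and
// check a property that must hold for *any* input, not just intended use.
let alphabet = Array("abc=XYZ0123 \n\t!@#")
for _ in 0..<10_000 {
    let length = Int.random(in: 0...12)
    let input = String((0..<length).map { _ in alphabet.randomElement()! })
    if let pair = parsePair(input) {
        // Round-trip property: anything the parser accepts should still
        // be accepted when reassembled from its own output.
        assert(parsePair("\(pair.key)=\(pair.value)") != nil,
               "Round-trip failed for input: \(input.debugDescription)")
    }
}
print("Fuzz run finished without tripping the invariant.")
```

It is no substitute for watching real users, but it does push the code through sequences its author never set out to test.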

Ideally, those less informed users should be watched as they use an app, because that can be informative about how well designed the user interface is. If they are struggling, then design changes may be called for.

Back in the old days, Apple was known to conduct a lot of such testing. They had human factors engineers and psychologists observe and document how people carried out certain tasks they had been given. They compared results when they gave people different methods to do something and used this as another (heuristic) metric for usability. A lot of effort went into that, and while the details were of course kept hush-hush, it was still publicly known that Apple was serious about such testing. I have no idea if anything like that is still done these days. Interface guidelines don’t seem to be taken that seriously anymore either. I guess it’s entirely possible Apple still does a lot of that type of testing and is simply much better at keeping people quiet about it these days, but honestly, I have my doubts.

Yes, Duane! “Software should be tested by people who have no idea how it is supposed to work. [Duane Williams]”

Yes, Al! Software should be tested “by people who know exactly how it is supposed to work. [Al Varnell]”

Yes, Duane, there should be extensive usability testing and there should be strict testing of conformance to guidelines (even if the decision may sometimes be made to go against a guideline: that’s how improvement to guidelines can happen, after field testing of proposed changes).

AND software should be tested by people who are paid to produce successful tests:

“A successful test case is one that detects an as-yet undiscovered error. [Myers]”