November 22, 2017

Webinar Notes: Justin Ison, "Automated Exploratory Testing": Crawlers and Data Gatherers

Imagine if you could write an app that would handle the monotony of gathering all the screenshots when performing user interface testing on web, mobile, and desktop apps. It could randomly crawl through the app under test, collecting screenshots to compare and contrast differences between:

  • Browsers and platforms, such as IE9, IE10 & IE11 on the PC, Safari on the Mac, Chrome, Firefox, Safari on the iPhone, and Chrome on an Android device. 
  • Mobile devices such as a variety of Samsung Android devices, iPhones, and tablets.
  • Various screen resolutions and breakpoints for web & mobile apps that have a responsive web design. 
  • Various orientations, such as portrait and landscape. 
  • Localization Testing: How the site keeps (or doesn't keep) its layout when the text is changed to Spanish, German (with its much longer words), Russian (with its Cyrillic alphabet), Arabic, or Hindi. 
  • How mobile apps behave if you do taps, presses, long presses, or swipes. 
Justin Ison (@isonic1), Senior Success Engineer at Applitools, did just that! 
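
Here's a minimal sketch of that cross-browser, cross-width screenshot-gathering idea in Python with Selenium. The site URL, page list, and breakpoint widths below are made up for illustration; Justin's actual tool is mobile-focused and built on Appium, as described later in these notes.

    # Sketch: capture screenshots of a few pages at a few responsive breakpoints.
    # Assumes chromedriver and geckodriver are available; URL and sizes are invented.
    from selenium import webdriver

    PAGES = ["/", "/products", "/contact"]      # hypothetical pages
    WIDTHS = [1920, 768, 375]                   # desktop, tablet, and phone widths
    BASE_URL = "https://example.com"            # placeholder site

    for make_driver, name in [(webdriver.Chrome, "chrome"), (webdriver.Firefox, "firefox")]:
        driver = make_driver()
        for width in WIDTHS:
            driver.set_window_size(width, 1000)
            for page in PAGES:
                driver.get(BASE_URL + page)
                safe_page = page.strip("/").replace("/", "_") or "home"
                driver.save_screenshot(f"{name}_{width}_{safe_page}.png")
        driver.quit()

Even this toy version shows why a bot is appealing: the nested loops do the tedious matrix walking, leaving a human only the reviewing.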


If his name sounds familiar, it's because I wrote in mid-November about attending an Applitools Eyes training session that he gave me.

Automated Exploratory Testing



Agile Software Development moves fast.

The time to market gets shorter and shorter.

More and more companies are going QA-less, relying on what Justin -- a former Microsoft employee -- refers to as telemetry (analytics).

... What is a QA Engineer at one of these companies to do?

The problem with relying solely on analytics instead of a QA Engineer?

  • You don’t see design flaws  
  • You aren't catching UI issues
  • You are opening yourself up to poor opinions about the quality of your product and your company. 
Take the Testing Pyramid, a testing component model Adventures in Automation has blogged about before. As Justin saw it, it consisted of:

  • Automated GUI Tests: Used sparingly
  • Automated API Tests
  • Automated Integration Tests
  • Automated Component Tests
  • Automated Unit Tests: All testing rests on this
What if Justin could create an app that covered what he thought of as "Automated Exploratory Tests" (AET): a bot that would randomly explore an app, hunting, gathering, and storing data for later analysis by a trained QA Engineer? That AET layer could rest on top of the pyramid, alongside the graphical user interface testing.

Justin's goal was to do the best he could to figure out a way to do automated exploratory testing, shortening the release and testing time.


Gathering Data Is Time Consuming


In my own personal experience, gathering all the data and screenshots, then comparing and contrasting them, is a very error-prone activity. The more data gathered, the more details get missed due to tester fatigue.
  • Let's say that we want to test a five-page web application in MS Edge on the PC, Chrome & Firefox on the PC and Mac, and Safari on the Mac. That's five different pages with six different browsers (5 * 6), which is thirty screens to review. 
  • If the entire web application is responsive, changing its look depending on the width of the screen -- a large desktop, a tablet, or a mobile phone -- that becomes (5 * 6 * 3) ninety screens to review. 
  • Perform this full range of testing on four similar sites, such as Stop & Shop, Giant-Carlisle, Giant-Landover, and Martin's, and all (5 * 6 * 3 * 4) three hundred and sixty screens will drive you mad!
Let's say a tester only spends five minutes per page to check: 
  • The basic layout
  • The text, from the font color to the font face and font size
  • The images and their quality 
  • Every aspect, from the header text, to all the content and text down, down, down below the fold, all the way to the footer navigation
... Five minutes * 360 screens, divided by 60 minutes per hour: that is 30 person-hours to test everything sufficiently! 
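
Spelled out in code, the back-of-the-envelope math from the example above:

    pages, browsers, breakpoints, sites = 5, 6, 3, 4
    screens = pages * browsers * breakpoints * sites      # 360 screens to review
    minutes_per_screen = 5
    print(screens * minutes_per_screen / 60)              # 30.0 person-hours per pass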

Now, imagine doing this every two weeks...

Welcome to the wonderful world of regression testing! 

Welcome to my world! ... Now you know how I became so gung-ho on automation development.


Should "Automated Exploratory Testing" Get a New Name?  


While I was listening to Justin talk about what he called "Automated Exploratory Testing", I cringed a bit, worrying that by describing his screenshot-collection bot with those words, Automated Exploratory Testing, he may have accidentally opened up a can of worms. 

You see... I've found in the software industry a mistaken belief that with automated testing as part of a continuous integration environment and paired with DevOps, all Quality Assurance Engineers can be shoved out the door. 

The Quality Assurance field started shoving back. 

There exists a school of thought called Rapid Software Testing, by James Bach of Satisfice, Inc. and Michael Bolton of DevelopSense, that has drawn a line in the sand on how QA Engineers -- excuse me -- "software testers" -- talk about the work they do. 
  • Michael Bolton put a stake in the ground with his November 2017 article "The End of Manual Testing" arguing against ever using the term "manual testing" to describe testing not involving coding. 
  • James Bach in March 2013 wrote "Testing and Checking Refined", arguing that ONLY human beings can learn "by experimenting, including study, questions, modeling, observation, inference, etc.", and that therefore automation should NEVER be called "automated testing". 
  • James Bach also advised people to use the simpler term "Tester" instead of "Software Quality Assurance Engineer" or "QA Engineer" in his 2013 blog entry, "To The New Tester". The latter is actually a misnomer, since, as Michael Bolton once asked me, "How can one assure quality?" 
When I started getting active in the online software testing community beyond my local one back in January, I quickly realized I'd accidentally start a flame war on the internets if I used the words common in the Boston software field -- terms that I adopted when I first joined the software testing world two decades ago and still identify with -- "automated testing", "QA", "Quality Assurance". 

... My experience butting up against this has been hideously stressful. What would their reaction be when they saw Justin use the phrase "exploratory testing" linked with automation? 

Capabilities of the Crawler


With the bot Justin was using, the automation did not actually replace the exploratory testing. The automation simply gathers the data, screenshots, and metrics. You would still need a trained QA Engineer to analyze and make sense of the data.
The crawler bot captured elements for every unique view:
  • For every resolution 
  • For every orientation 
It found: 
  • A way for everyone to easily see the current app state, even if they didn't have the app in front of them
  • Language and localization issues 
  • Performance data 
  • Unique telemetry data, since the testing is randomized
It also could replay a crawl after a code fix.
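
Justin's crawler itself isn't public yet, so the following is only my bare-bones guess at what the random crawl-and-capture loop could look like with Appium's Python client. The package name, Appium server URL, crawl length, and tap-only strategy are my assumptions, not details from the talk.

    # Sketch of a random UI crawl: tap random clickable elements and save a
    # screenshot the first time each unique view is seen.
    import hashlib, random
    from appium import webdriver
    from appium.options.android import UiAutomator2Options
    from appium.webdriver.common.appiumby import AppiumBy

    caps = {"platformName": "Android",
            "appPackage": "com.example.app",        # hypothetical app under test
            "appActivity": ".MainActivity",
            "automationName": "UiAutomator2"}
    driver = webdriver.Remote("http://localhost:4723",
                              options=UiAutomator2Options().load_capabilities(caps))

    seen_views = set()
    for step in range(50):                          # arbitrary crawl length
        view_id = hashlib.md5(driver.page_source.encode()).hexdigest()
        if view_id not in seen_views:               # new unique view: record it
            seen_views.add(view_id)
            driver.get_screenshot_as_file(f"view_{len(seen_views):03d}.png")
        clickable = driver.find_elements(AppiumBy.XPATH, '//*[@clickable="true"]')
        if not clickable:
            break
        random.choice(clickable).click()            # wander to the next state
    driver.quit()

Replaying a crawl after a code fix could then be as simple as recording the random seed and action sequence and feeding them back in instead of choosing randomly.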

UI Checking:
  • What does it look like on a tablet? Do the images look stretched? 
  • Accessibility detection: Do all elements have accessibility labels? You can map the findings back onto the UI.
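
For the accessibility-label check, one straightforward approach on Android (my own sketch, not necessarily how Justin's bot does it) is to flag any clickable element whose content-desc is empty, using the element's position to map the finding back onto the UI:

    # Sketch: flag clickable Android elements with no accessibility label (content-desc).
    # Assumes an already-connected Appium `driver` like the one in the crawl sketch above.
    from appium.webdriver.common.appiumby import AppiumBy

    unlabeled = driver.find_elements(
        AppiumBy.XPATH, '//*[@clickable="true" and (@content-desc="" or not(@content-desc))]')
    for element in unlabeled:
        print("Missing accessibility label:", element.get_attribute("class"), element.rect)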


Performance Testing: 

What Justin terms a “Forgotten Test”.  
  • “It is imperative to know more about what’s happening under-the-hood of your mobile application”.
  • You are "monitoring the memory, cpu and application size.”
  • You could get this information. Store it. Benchmark it. See if performance is getting better or worse.
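
On Android, one crude way to gather that under-the-hood data is to shell out to adb while the crawl runs. This is a sketch under my own assumptions (the package name is a placeholder; Justin didn't walk through his exact implementation):

    # Sketch: sample memory and CPU figures for an Android app via adb during a test run.
    import subprocess

    PACKAGE = "com.example.app"   # placeholder package name

    def adb(*args):
        return subprocess.run(["adb", "shell", *args],
                              capture_output=True, text=True).stdout

    meminfo = adb("dumpsys", "meminfo", PACKAGE)   # per-app memory breakdown
    cpuinfo = adb("dumpsys", "cpuinfo")            # recent CPU usage for all processes
    print(next((line for line in meminfo.splitlines() if "TOTAL" in line), "no memory total found"))
    print(next((line for line in cpuinfo.splitlines() if PACKAGE in line), "app not in CPU snapshot"))

Store each sample with a timestamp and build number, and the benchmarking Justin describes falls out of comparing runs over time.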

Language Detection:
Justin dug into all the open-source translation libraries. He found seven of them ... but none worked the way he wanted, so he relented and used a paid service: Google Translate.

It provided better results than any of the open-source tools he tried. To save costs, he didn't run it every time, just before every release.
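
For reference, a language-detection call against Google's paid Cloud Translation API looks roughly like this today. This is my sketch of the service Justin mentioned, not his code; it assumes the google-cloud-translate package is installed and credentials are configured.

    # Sketch: detect the language of text scraped from the UI via Google Cloud Translation.
    from google.cloud import translate_v2 as translate

    client = translate.Client()   # reads GOOGLE_APPLICATION_CREDENTIALS from the environment
    result = client.detect_language("Willkommen zurück, bitte melden Sie sich an")
    print(result["language"], result["confidence"])   # e.g. 'de' plus a confidence score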

Log Monitoring:

“When performing exploratory testing, it’s very important to monitor the logs at the same time.”

Many errors go unnoticed in the UI, such as network, API, or memory errors.
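
Again on Android, a simple way to keep an eye on the logs while the crawler runs is to stream adb logcat filtered to error severity; a minimal sketch, assuming an attached device:

    # Sketch: stream error-level Android log lines alongside an exploratory crawl.
    import subprocess

    logcat = subprocess.Popen(["adb", "logcat", "*:E"],   # errors and above only
                              stdout=subprocess.PIPE, text=True)
    for line in logcat.stdout:                            # runs until interrupted
        print("Device error log:", line.rstrip())         # network, API, and memory errors surface here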


Future Plans? The Crawler Will Be Converted to an Open Source Project!


Coming soon, the crawler he created will be released on GitHub! It:
  • Operates from a command line 
  • Is integrated with Appium
  • Has rules for where it can and can't go, written in TOML (a hypothetical example is sketched below)
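
Since the rules file hasn't been published yet, the following is purely a guess at what such a TOML crawl-rules file could look like, along with how a command-line tool might read it using Python's built-in tomllib; the keys and values are invented.

    # Sketch: parse a hypothetical TOML crawl-rules file (tomllib requires Python 3.11+).
    import tomllib

    RULES_TOML = """
    [crawl]
    max_steps = 200                                        # stop after this many random actions
    allowed_activities = ["com.example.app.MainActivity"]  # screens the bot may visit
    blocked_text = ["Log out", "Delete account"]           # buttons it must never tap
    """

    rules = tomllib.loads(RULES_TOML)
    print(rules["crawl"]["max_steps"], rules["crawl"]["blocked_text"])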


Parting Words? Creating His Tool Has Had Its Ups and Downs


Justin has been somewhere between "Hrm" and "Hey!!!!" ... Thank you, Justin, for sharing your experience!

Happy Testing!

-T.J. Maher
Twitter | LinkedIn | GitHub

// Sr. QA Engineer, Software Engineer in Test, Software Tester since 1996.
// Contributing Writer for TechBeacon.
// "Looking to move away from manual QA? Follow Adventures in Automation on Facebook!"
