How To Get More Results Out Of Your Software Testing

January 30th, 2023 by Daniel MacKenzie

If you’ve ever worked on a home renovation project, you’ve probably learned “measure twice, cut once” the hard way. Ensuring you are ready to cut an expensive piece of lumber means double-checking—sometimes triple-checking—your numbers before that saw comes down.

The logic behind that metaphor applies to software development too. Instead of measuring, we’re talking about testing. While measuring a cut is simple, testing can often be complex. It includes functional and non-functional testing as well as manual and automated testing, and even a thorough test plan sometimes doesn’t cover every use case.

Testing isn’t only something that happens at the end of a project either. We build test plans into every phase of a project, from accessibility testing during design work to system testing as app development progresses.

In this post, we’ll talk about the different types of testing and a tool that helped us track down an odd use case we didn’t expect.

What are the different types of testing?

Software testing can be broken down into functional and non-functional testing. Functional tests are designed to verify the functionality of an application, API, or other software project. Non-functional tests verify things like accessibility, security, and performance.

Examples of functional testing

  • Unit tests are designed to test individual classes and components. If the application includes a function that saves a record in a database, you could create a unit test to verify the record was saved correctly (see the sketch after this list).
  • Integration tests verify that two or more functions in a system work correctly together. An example would be verifying that a ticket purchase correctly processes the payment and that the virtual ticket is sent to the customer. Integration tests can include hardware functions too. If the application takes a photo using the smartphone camera, the integration test will verify the photo was taken and is available in the application.
  • System testing exercises the entire application. These tests are also referred to as end-to-end tests. For these, quality assurance teams typically perform manual testing on various devices, operating systems, and browsers if the app is web-based.
  • Acceptance testing is usually the finish line of testing. At this point in the software development life cycle, the application is in a state where the client can begin testing.
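
To make the unit-test idea concrete, here’s a minimal sketch of what a test like the database example above could look like, written in TypeScript with Jest. The saveHabit function, Habit record, and HabitStore interface are hypothetical stand-ins rather than code from a real project.

    import { test, expect } from "@jest/globals";
    import { randomUUID } from "node:crypto";

    // Hypothetical shapes for a habit record and its persistence layer.
    interface Habit {
      id: string;
      name: string;
    }

    interface HabitStore {
      insert(habit: Habit): Promise<Habit>;
    }

    // Hypothetical function under test: trims the name and saves the record.
    async function saveHabit(store: HabitStore, name: string): Promise<Habit> {
      const trimmed = name.trim();
      if (!trimmed) {
        throw new Error("Habit name is required");
      }
      return store.insert({ id: randomUUID(), name: trimmed });
    }

    // The unit test uses an in-memory fake store, so no real database is touched.
    test("saveHabit stores a trimmed habit name", async () => {
      const inserted: Habit[] = [];
      const fakeStore: HabitStore = {
        insert: async (habit) => {
          inserted.push(habit);
          return habit;
        },
      };

      const habit = await saveHabit(fakeStore, "  Drink water  ");

      expect(inserted).toHaveLength(1);
      expect(habit.name).toBe("Drink water");
    });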

Examples of non-functional testing

  • Security tests are designed to ensure that the application is as safe as it can be from malicious attacks or code. These tests verify that passwords and other personally identifiable information are not stored or transmitted in plain text, that password requirements are enforced, and that the latest security patches for any third-party libraries are applied.
  • Compatibility testing ensures that the application works smoothly regardless of the device, operating system, or browser. This is equally important whether it is a mobile or browser-based application. There are now hundreds of combinations of Android and iOS-powered devices and operating system versions that users might have. Compatibility testing also helps define the minimum device and operating system requirements for the application.
  • Performance testing is critical for the backend portions of an application. These tests verify the scalability of databases, APIs, and other server technologies to ensure that a surge of users won’t crash the application (a rough sketch follows this list).
  • Usability testing is one of the most crucial tests and often involves heavy manual testing to ensure the application is accessible to all users.
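
As a rough illustration of the performance-testing idea, the sketch below fires a burst of concurrent requests at an API endpoint and reports how long they take. The URL and concurrency level are placeholders, and a real project would usually reach for a dedicated load-testing tool; this is only a minimal TypeScript example that assumes Node 18+ for the built-in fetch.

    // Minimal load-test sketch: send CONCURRENCY simultaneous requests
    // and report the fastest and 95th-percentile response times.
    const TARGET_URL = "https://api.example.com/habits"; // placeholder endpoint
    const CONCURRENCY = 50; // placeholder burst size

    async function timedRequest(url: string): Promise<number> {
      const start = performance.now();
      const response = await fetch(url);
      await response.arrayBuffer(); // drain the body so timing covers the full response
      return performance.now() - start;
    }

    async function runBurst(): Promise<void> {
      const timings = await Promise.all(
        Array.from({ length: CONCURRENCY }, () => timedRequest(TARGET_URL)),
      );
      timings.sort((a, b) => a - b);
      const p95 = timings[Math.floor(timings.length * 0.95)];
      console.log(`fastest: ${timings[0].toFixed(0)} ms`);
      console.log(`p95:     ${p95.toFixed(0)} ms`);
    }

    runBurst().catch(console.error);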

Testing in the field

Even with a dedicated QA team, some edge cases still occur once an application has been released. This can be especially true for an application like BrainFit - Free Habit Tracker since it’s designed to enable users to create various personalized habits. 

In this case, our CEO Wes Worsfold encountered an issue in his daily use of the application that our QA team had trouble reproducing. Wes uses an iPhone 14 with the latest operating system, yet our team couldn’t reproduce the issue by following the steps in his bug report. These types of issues are called edge cases.

Edge cases are issues that happen only on specific devices or at specific times. They’re tough to capture, especially in our development and testing environments.

We’re always looking for ways to improve our processes, and Wes’s edge case called for a new tool that could give us information and insight from a distance. We never know when these issues will occur, so we need to track what the user is doing over long periods.

Our research led us to LogRocket. LogRocket helps us find issues faster by logging what a user does in the application, regardless of how long their session is. We can then “replay” the session to narrow down the steps that caused the application to misbehave. LogRocket captures everything the user does in the UI, along with network activity and anything else the application does on the device, such as saving data to a local datastore.
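
For a sense of what the setup looks like, here is a minimal sketch using LogRocket’s JavaScript SDK in TypeScript. The app ID and user details are placeholders, and the exact wiring depends on the platform, so treat this as an illustration rather than our production configuration.

    import LogRocket from "logrocket";

    // Placeholder app ID; LogRocket assigns one per project.
    LogRocket.init("your-org/your-app");

    // Optionally tag the session so a specific user's replay is easy to find.
    // The ID and traits here are purely illustrative.
    LogRocket.identify("user-123", {
      name: "Test User",
    });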

For this testing, we created a unique build of the application for Wes to use. We omitted LogRocket from publicly available versions due to the amount of data it collects.
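
One way to gate a tool like this to specific builds is a build-time flag, sketched below. The ENABLE_SESSION_LOGGING variable and the app ID are assumptions made for illustration, not a description of how the BrainFit build is actually configured.

    import LogRocket from "logrocket";

    // ENABLE_SESSION_LOGGING is a hypothetical flag injected at build time;
    // public release builds leave it unset, so LogRocket is never initialized
    // for regular users.
    if (process.env.ENABLE_SESSION_LOGGING === "true") {
      LogRocket.init("your-org/your-app"); // placeholder app ID
    }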

After a day of usage, we used the data from LogRocket to see how Wes entered and managed habits in BrainFit and replicated his steps in development. With the mystery solved, we made updates to the production application and released a new version for users worldwide.

Even when we “measure twice” with a complete test plan, some edge cases will still occur in the field. Tools like LogRocket make it easier to find those edge cases and help us deliver the best user experience for all users on whatever device they’re using.

