Insight

How Robotic Integrators Can Save Money With Better Testing

Keali'i Wichimai, Go West Robotics

Testing is arguably the most essential part of the development process, yet it typically takes a backseat when resources are allocated. Can you deliver optimized code with a great set of features and a full test suite while still meeting your delivery date? Yes, you can!

Building a comprehensive test suite will allow you to spend less time on releasing hardware, testing cells, and on-site support, which frees up your team to focus on what matters.

Software Testing Principles

The first step to creating a comprehensive test suite is to outline a set of software testing principles. The following broad guidelines can help you develop an approach that best suits your team:

  • Testing should be set up to find bugs or defects – While this may sound obvious, many development teams face the issue that their tests are “designed to pass.” Test cases should not cover only the expected paths; they should be designed with edge cases in mind and include cases that are expected to fail (see the edge-case sketch after this list). A good understanding of how users will interact with the system and how the system fits within its business process is critical to identifying edge cases. Effective test case design starts on day one of a project by capturing appropriate requirements and accurately defining comprehensive user stories.
  • Exhaustive testing is not the best approach – The goal of a test suite should be to test each functional section of your code, not to test every aspect of your code in every possible way. Although edge cases should be identified and tested (see above), make sure the team is not investing in test cases for situations that will never occur in a real-world setting. You can use this simple heuristic to help you scope your tests: every code path should have a test, and code paths that never occur in real-world situations should not exist in your code.
  • Early testing – Test early and test often. Waiting to test code is an expensive mistake and can cause significant downtime while the source of an issue is identified. Make it easy for developers to run tests on their code. Ensure tests are built alongside new code and are executed automatically whenever changes are merged into the codebase (see the gate-script sketch after this list), eliminating the need to manually retest those lines in the future. Provide access to cloud resources and, if needed, run tests in parallel so the suite stays fast.
  • Review and update test cases – Test cases should be reviewed periodically and modified to encompass new use cases so that testing stays relevant. If code is continually tested in the same way, your codebase can become “immune” to your test suite, limiting its ability to surface new bugs or defects. Teams should review test case coverage as part of each PR review: there is no need to review cases for code that did not change, but if a test was covering line 12 and line 12 was changed, that test case should be reviewed, regardless of sprints or releases.
  • Context matters in testing – Where and how to test depends on how the system is used. Understanding the use cases for your system and the data required to accomplish each task is paramount to developing meaningful tests. Importantly, this understanding must be shared in a format that is easy for developers to understand, and it should be treated as a “living” document that is updated whenever new information or requirements arrive.
  • Defects/bugs tend to cluster – It is common for a large percentage of issues to be related to just a few components of your codebase. Focusing your testing on these areas is important, but identifying where the problems are coming from tends to be the much more challenging step. Target these critical areas early when designing your test plan by identifying complex or pervasive functions.
  • Absence of error is not absence of defects – Concluding that your code is bug-free because no errors were found in testing is an expensive and easy mistake to make. If the initial requirements were incorrect, out of date, or incomplete, the testing performed downstream could entirely miss the user’s expectations. The absence of errors in your test results does not equate to an absence of bugs.
  • Test results should not be “Pass/Fail” – Binary test results don’t provide helpful feedback to developers. Your testing suite, and your test cases, should be designed to provide context when tests don’t execute successfully (see the diagnostics sketch after this list). For example, automatically including logs, the specific lines of code that were executing, and the current state of the system with test results can help developers pinpoint bugs and resolve them more quickly.
  • Require tests to pass before code is moved to production – It is critical that successful test execution is required before code is promoted to production. Beyond the potential impact of bugs being introduced from a single haphazard or rushed release, lax release procedures can create a culture of complacency that is hard to change. Don’t make exceptions. Automate the release process to require successful testing before code is promoted, if possible.
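
To make the first principle concrete, here is a minimal pytest sketch for a hypothetical plan_move() helper on a robot cell. The function, joint limit, and error type are assumptions made for illustration, not any particular integrator's code; the point is that the suite deliberately includes inputs the planner is expected to reject, not just the happy path.

```python
import pytest

# Hypothetical motion-planning helper, defined inline so the sketch
# is self-contained; a real project would import it instead.
JOINT_LIMIT_DEG = 170.0

def plan_move(target_deg: float) -> float:
    """Reject out-of-envelope targets instead of silently clamping."""
    if not -JOINT_LIMIT_DEG <= target_deg <= JOINT_LIMIT_DEG:
        raise ValueError(f"target {target_deg} exceeds joint limit")
    return target_deg

def test_nominal_path():
    # Expected path: a target well inside the joint envelope.
    assert plan_move(45.0) == 45.0

@pytest.mark.parametrize("edge", [-170.0, 170.0])
def test_boundary_values(edge):
    # Edge cases: targets exactly at the published limit.
    assert plan_move(edge) == edge

@pytest.mark.parametrize("bad", [-170.1, 170.1, float("inf")])
def test_out_of_range_is_rejected(bad):
    # Cases designed to "fail": the planner must refuse these rather
    # than pass them through to the controller.
    with pytest.raises(ValueError):
        plan_move(bad)
```

Running pytest against this file exercises the nominal path, both boundary values, and three inputs the planner must refuse.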
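
For the early-testing and release-gating principles, one minimal pattern is a gate script that your CI system (or a git hook) runs on every change. This is a sketch under the assumption that your tests live in a tests/ directory and run with pytest; it simply shells out to the suite and refuses to continue on failure, which most CI systems interpret as blocking the merge or release.

```python
import subprocess
import sys

def run_gate() -> int:
    # Run the whole suite; '-x' stops at the first failure so broken
    # changes are reported quickly, '-q' keeps CI logs readable.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-x", "-q", "tests/"],
    )
    return result.returncode

if __name__ == "__main__":
    rc = run_gate()
    if rc != 0:
        # A nonzero exit code blocks the promotion step in most CI systems.
        print("tests failed: blocking promotion to production")
    sys.exit(rc)
```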
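
And for richer-than-Pass/Fail results, the sketch below assumes a hypothetical snapshot_state() helper and leans on pytest's log capture: when an assertion fails, the captured state records and the formatted assertion message appear in the failure report, giving the developer context instead of a bare red X.

```python
import logging
import pytest

log = logging.getLogger("cell")

def snapshot_state() -> dict:
    # Hypothetical cell-state snapshot; field names are illustrative.
    return {"gripper": "open", "conveyor_rpm": 42, "fault_code": None}

@pytest.fixture
def cell_state():
    state = snapshot_state()
    # WARNING level so pytest surfaces the record in failure reports
    # without extra log-level configuration.
    log.warning("state before test: %s", state)
    yield state
    log.warning("state after test: %s", snapshot_state())

def test_pick_sequence(cell_state):
    # A failure here reports the captured state and the message below,
    # not a bare Pass/Fail flag.
    assert cell_state["fault_code"] is None, f"cell faulted: {cell_state}"
```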

Building a Testing Strategy

A well-executed testing strategy can be a less costly and equally effective alternative to relying solely on full-system and post-release testing. A practical test strategy should outline the types of testing to be performed, with exit criteria and priorities for each type of test. Exit criteria should be clearly defined and align with the system’s functional requirements. The goal of the testing strategy should be not only to identify bugs but to do so as early as possible: the longer a bug goes undetected, the more expensive it is to resolve. Below is an overview of the types of testing that might be incorporated into your testing strategy:

  • Unit Tests: The testing of specific lines of code to determine if they operate as expected
  • Integration Tests: The testing of various modules/components of code together to determine if there is any conflicting or unexpected behavior (a sketch contrasting unit and integration tests follows this list)
  • Full System Tests: Testing on the completed system to determine if the functional requirements have been met
  • User Acceptance Tests: Testing with the end-user to confirm that the system operates as intended and meets their requirements
  • Burn-In Tests: An extended test that is meant to identify any stability issues or defects that would have otherwise gone undetected during a typical test
  • Volume Tests: Testing that subjects the system to an excessive amount of input to identify any bottlenecks and capacity limits
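
As a rough illustration of the first two levels, the sketch below uses two stub modules whose names and behavior are assumptions made for the example: the unit test exercises the vision stub in isolation, while the integration test checks the seam between vision and sequencing for conflicting or unexpected behavior.

```python
# Two illustrative stubs; real systems would import these modules.
def detect_part(frame: list) -> bool:
    """Vision stub: a part is 'present' if any pixel is nonzero."""
    return any(frame)

def pick_if_present(frame: list, picks: list) -> None:
    """Sequencer stub: appends a pick command when vision fires."""
    if detect_part(frame):
        picks.append("pick")

def test_detect_part_unit():
    # Unit test: the vision module alone, against known frames.
    assert detect_part([0, 1, 0])
    assert not detect_part([0, 0, 0])

def test_pick_sequence_integration():
    # Integration test: vision plus sequencing together, verifying
    # that exactly one pick is issued across both frames.
    picks = []
    pick_if_present([0, 1, 0], picks)
    pick_if_present([0, 0, 0], picks)
    assert picks == ["pick"]
```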

The Go West Approach

At Go West, we believe in continuous testing throughout the development process. To do this cost-effectively, we built our testing into an automated test suite that runs at every stage, from initial development to final release. By integrating cloud-based version control into our release process, we can also trace post-release bugs to the versions they affect. Leveraging the cloud lets our clients reduce on-site support time by resolving and deploying fixes remotely, and a fix applied for one client can then be pushed to all affected customers.

We enjoy helping robotics integrators produce the best automation experience possible for long-term success and scalability. Want to know more about our process? Please send us a note and let us know how we can help you.

Is there a topic you need help with or would like us to cover? Drop us a quick note and let us know: insights@gowestrobotics.com

Want to learn more?

We'd love to talk to you. Contact us to see how we can help.
