Writing High-Quality Test Scripts: Taming the Chaos
8 min read
Nathan Haynes : Apr 15, 2022 11:14:53 AM
This is Part 2 in a series on creating high-quality test scripts. In Part 1, “Writing High-Quality Test Scripts: Taming the Chaos,” we discussed the complexity of the testing environment, defined what a test script is, compared test scripts to test cases, and reviewed some of the considerations for designing and developing test scripts. In this installment, we compare and contrast some of the inputs and outputs, examine the programming languages used, explore the impact of peer reviews, and take a look at the impact of test cases now and in the future.
To better understand the relationships between the various elements that go into creating and maintaining the testing process, a few definitions are in order. Some of these terms have similar or overlapping meanings, depending on the industry using them. For the scope of this article, we are citing the most common usage in the automotive realm:
Use Case: A use case describes how a given system must perform a task under a specific set of conditions. Use cases are outlined with various software or business requirements—which describe how end users will engage with the system—and the various outputs that the end users should receive. Use cases describe how a product is supposed to work, whereas test cases describe how a product is supposed to be tested.
Test Case: A test case is a detailed document that contains a set of actions or conditions that are performed on a software application to verify that the features within the application all function as expected. Derived directly from use cases, test cases ensure that a product is thoroughly tested.
There are many different types of test cases, including formal and informal test cases, and those that test functionality, user interfaces, integration, performance, security, usability, database storage and retrieval, and user acceptance. There are also exploratory test cases. Defining all these types in detail is beyond the scope of this article. However, there are two common truths that apply to all the various types of test cases:
Test cases are a source of truth that ensure proper test coverage, help reduce the cost of software maintenance and support, and improve quality. They enable testers to think things through and approach the tests from many different vectors, helping to verify that the software meets requirements. Test cases are also reusable, empowering future testers to utilize them to perform the tests again independently.
Test Plan: Where a test case is scoped to a particular testing situation or a specific aspect of a product’s functionality, a test plan is a significantly more comprehensive and overarching document designed to capture the information required to cover all aspects of testing the software. That information includes: the test strategy, the scope and objectives of the test, the test schedule (including start and end dates), relevant estimations, applicable deadlines, and the resources that will be required to complete the work. It is a plan, controlled by test managers, that aligns organization-wide expectations with what actually happens as testing is performed, in order to validate that the software is functioning as intended.
Test Script: In some circles, the terms test script and test case are virtually interchangeable, because they both describe the actions that test a software element’s functionality. However, in the automotive arena, the term test script is typically used in the context of automated testing, in which a machine does the testing. In other words, developers write test scripts to be machine-readable, as opposed to test cases which are written to be interpreted by the humans who are performing manual testing.
It is helpful to keep in mind the cause-and-effect order in which the various test-related elements are created. In the automotive industry, requirements shape the test cases. The way the test cases are written drives how the code will be programmed and how the scripts will be written. In turn, the scripts and programming constitute the test. In other words, the requirements define everything that takes place downstream, and everything downstream should fall within the scope of the requirements, no more and no less.
The goal of testing is to run the test subjects through a scenario in an accurate and realistic manner within an automated computer-based simulation environment, testing to the requirements. Therefore, the requirements themselves must be specific, unambiguous, and measurable. The cleaner the requirements, the clearer and more concise the test cases, the more efficient the test scripts, and the more economical and trustworthy the tests.
Optimally, test cases should be written early in the software development lifecycle—specifically, during the phase when requirements are gathered. Test cases are defined by these requirements. Therefore, as they are writing the test cases, the tester should refer often to the requirements and use case documentation, as well as to the overall test plan.
Test cases should be written in a clear and concise manner and should take into consideration any relevant application flows. They should also be kept economical and easy to execute on a high level. This precautionary effort will reduce the maintenance burden when the application inevitably evolves.
There are certain characteristics and best practices that are particularly important in regard to all test scripts:
A test case is a set of instructions on how to validate a particular test objective. It has components that define an input, action, and then an output (expected result), to determine if a given feature in the application is working correctly.
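The input, action, and expected-result structure described above can be sketched as a minimal automated check. This is an illustrative assumption, not a real system: the sensor function and the 170 kPa idle value are invented for the example.

```python
# Hypothetical sketch of a test case's three components: input, action,
# and expected output. Names and values here are illustrative only.

def read_pressure_kpa(rpm):
    # Stand-in for the real system under test (e.g., a simulated sensor read)
    return 170 if rpm <= 1000 else 240

def test_oil_pressure_at_idle():
    # Input: the precondition that sets up the scenario
    engine_rpm = 800

    # Action: exercise the feature under test
    measured_kpa = read_pressure_kpa(engine_rpm)

    # Expected output: the result that determines pass/fail
    assert measured_kpa == 170, f"expected 170 kPa at idle, got {measured_kpa}"

test_oil_pressure_at_idle()
```

If the assertion holds, the feature behaves as expected for that input; if it fails, the captured value tells the tester exactly what was observed.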
When building inputs and outputs, developers must define and document several fundamental considerations:
Additionally, certain inputs and outputs could require that a unit fall within a range value—for example, +/-5—and that the test case be able to accurately capture that value and write it within those limits.
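A range check like the +/-5 example might look like the following sketch, which captures the measured value and records it alongside the computed limits. The helper name and report format are assumptions for illustration.

```python
# Illustrative tolerance check: pass if the measured unit falls within
# target +/- tol, and capture the value next to the limits it was judged by.

def within_tolerance(measured, target, tol=5):
    low, high = target - tol, target + tol
    passed = low <= measured <= high
    # Write the captured value within its limits, as the test case requires
    print(f"measured={measured} limits=[{low}, {high}] -> "
          f"{'PASS' if passed else 'FAIL'}")
    return passed

within_tolerance(102, 100)  # inside the +/-5 band
within_tolerance(94, 100)   # outside the band
```

Keeping the tolerance as a parameter rather than a literal makes the same check reusable wherever a range value applies.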
Developers should keep in mind a test case’s potential reusability, which is usually higher when inputs and outputs do not have very strict limits. To support reuse, developers typically write target values as variables, which can then be shared across repetitive test cases. Clients often have projects that involve their own library of test cases and scripts for closely related subjects within their own product lines. Sometimes those projects overlap, with a significant portion of the components in related products being interchangeable.
For example, imagine a team of developers working on a project revolving around an engine oil pressure system, with one engine intended for desert use and the other for arctic conditions. Most of the components of these two closely related engines are the same; only the parameters differ, depending on the application. The parameters would be rewritten at the project level, allowing room for adjustment. Employing variables rather than hard-coding parameters into the testing balances reuse with customization.
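The desert/arctic example above can be sketched as one generic script driven by project-level parameter sets. The parameter names and numeric limits below are invented for illustration, not taken from a real program.

```python
# Hypothetical project-level parameter sets for two closely related engines.
# Only the variables differ; the test script body is shared.

DESERT = {"oil_temp_max_c": 140, "pressure_min_kpa": 150}
ARCTIC = {"oil_temp_max_c": 110, "pressure_min_kpa": 180}

def check_oil_system(reading_temp_c, reading_kpa, params):
    # The same script serves both projects; swapping the params dict
    # retargets the test without touching the logic.
    return (reading_temp_c <= params["oil_temp_max_c"]
            and reading_kpa >= params["pressure_min_kpa"])

print(check_oil_system(120, 200, DESERT))  # passes the desert limits
print(check_oil_system(120, 200, ARCTIC))  # fails: temp exceeds arctic limit
```

When a new variant appears, only a new parameter set is written; the script itself is reused unchanged.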
After the test is planned out, the code can be programmed. Developers use an internal tool to create automated test requirement parameters, cases, expected outcomes, and plans, but test engineers are still closely involved in overseeing each step and writing the test script itself. Some aspects of the script can be automated—defining variables, inputs/outputs, etc.—but writing typically is still done manually.
When initiating test runs at the end, an automated system is utilized. LHP frequently uses National Instruments (NI) software, so coding is primarily done in TestStand and LabVIEW; .NET, Python, and C are good alternatives as well. TestStand is efficient because it is a complete test automation engine designed to run test scripts, with features that let developers communicate with NI hardware. It is also flexible enough to call code written outside of TestStand itself, in Python, C, or LabVIEW, making it a valuable and consistent option for test script execution.
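TestStand can invoke ordinary Python functions as sequence steps. The sketch below is a hedged illustration of what such a callable step might look like; the function names, the pass/fail return shape, and the dyno conversion factor are all assumptions, and the hardware read is simulated.

```python
# Hypothetical Python step module that a TestStand sequence could call.
# Real hardware I/O (e.g., an NI DAQ read) is replaced with a simulation.

def simulate_dyno(rpm):
    # Placeholder for the real measurement; 0.21 is an invented factor
    return rpm * 0.21

def measure_step(setpoint_rpm, limit_low, limit_high):
    """Run one measurement and return (passed, measured) to the sequence."""
    measured = simulate_dyno(setpoint_rpm)
    passed = limit_low <= measured <= limit_high
    return passed, measured

print(measure_step(1000, 200, 220))
```

Keeping each step as a plain function with explicit limits makes the same module callable from an automation engine or runnable stand-alone during development.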
Well-rounded peer review is critical during every step of the process. A key factor in what sets LHP apart from the competition is the different ranges of experience that LHP brings to their partnerships. LHP has embedded engineers, test engineers, functional safety engineers, hardware engineers, and ALM-focused engineers, all supplying a great mixture to the work that is done within these larger projects. Validating code, for example, usually is not a single-person process, and having peer code reviews is an effective way to ensure that validation work is thorough. These teams divide the work into chunks until everything is reviewed, and then they start the testing itself.
Outside of expected failures, the other challenges faced during the test creation process can range from insignificant to extreme. Unless a failure stems from one of the assumed outcomes, it is examined as an immediate problem. If inconsistencies are found in the coding, they are typically caught early. The coding team uses peer reviews to analyze the code, then adjusts it until the issues are resolved.
The automotive industry is historically built around model years. However, test script projects generally are not affected by this because most test cases span multiple model years. Cases are usually more program-driven in the sense that these programs have their own multi-year lifespans, and the test cases follow suit. Occasionally though, changes in the program will be caused by a model year, and if these model changes are substantial enough, developers will need to construct a new program. It really depends on the degree of overlap. Some model-year changes are more superficial, affecting mostly body and trim. Others, however, can be the result of a complete redesign. This is where reusability can really pay off.
The test case projects that LHP supports can range from implementing test requirements and placing them in clients’ libraries for reuse, to building test cases from several pages of documentation. Either way, the work LHP performs reduces the workload on their clients, saving them valuable time and resources.
Naturally, some projects involve global clients. In such cases, there is usually a U.S. location these companies use to manage most of their communication, bridging any time and language gaps with their counterparts around the world.
Within the automotive industry, testing is a valuable service that can be leveraged by companies that want to use external sources for testing instead of carrying the burden of supporting in-house testing on their own. Companies that take advantage of outsourced testing develop a more streamlined and automated process overall, saving tremendous time and effort.
It should be noted that writing test scripts and performing testing efficiently can require significant growth on the part of the client, both in learning and adhering to proper internal processes and in maintaining timely and robust communication with its partners. If utilizing external sources becomes a more frequent option for a company, the test script writing from these outside sources must meet and maintain certain standards. Not every service provider will be able to keep up, so it is imperative that companies utilize proven partners with a verifiable track record of success, like LHP.
It will become more common for companies to migrate from manual testing to automated testing. The timing, however, will vary with each company. Though it is straightforward to gauge and predict the levels of progress that a company can make with the test script writing process, there are several factors that will impact the pace and effectiveness of this transition, including:
For automotive manufacturers to move forward into a safer and more efficient testing environment, there must be growth and maturation within the scope of testing, the coding process itself, and the act of validation. Automated testing has already proven itself to be essential. It is inevitable that the automotive industry will continue to adopt and embrace these critical methodologies.