
Reflections and Resolutions: Top 5 Lessons Learned in Testing This Year

By Rex Jones

Here are 5️⃣ lessons I learned or relearned this year in testing:

  1. Test design techniques are a great way to find gaps
  2. AI in software testing is here to stay
  3. Selenium and Playwright are the most popular automation libraries
  4. Codeless and low-code tools are valuable
  5. The great divide between developers and testers

Test design techniques are great for finding gaps

Test design techniques are different from test types, test cases, and test scenarios. A test design technique is a standardized method to develop test cases and gain greater coverage of an application. There are many variants and combinations to achieve better coverage.

Some advantages of test design techniques include:

  • Effective for detecting defects
  • Independent of defining and executing a test case
  • Elaborates on the test strategy by aligning on what’s needed for test coverage

For testers, test design techniques are crucial to finding gaps in requirements or acceptance criteria. Three examples of test design techniques are Boundary Value Analysis, Equivalence Class Partitioning, and Cause-Effect Table.

  • Boundary Value Analysis – tests the boundary value itself, the value directly below the boundary value, and the value directly above the boundary value.

Let’s say a requirement stipulates 16 is the legal age to drive. A test ensures the application prevents a 15-year-old user from applying, but approves users who are 16 and 17. Why test for 17? The developer can mistakenly code “equal to 16 years old” (= 16) instead of “greater than or equal to 16 years old” (>= 16), which would mean only users who are exactly 16 would get approved.

| Value below boundary | Boundary value | Value above boundary |
| --- | --- | --- |
| 15 | 16 | 17 |
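The driving-age example can be sketched as a few assertions. This is a minimal illustration, not code from the article; the function name `is_eligible_to_drive` is hypothetical.

```python
def is_eligible_to_drive(age: int) -> bool:
    # A common defect is writing "age == 16" here instead of "age >= 16".
    return age >= 16

# Boundary Value Analysis: test the boundary itself, one value below,
# and one value above.
assert is_eligible_to_drive(15) is False  # value below boundary
assert is_eligible_to_drive(16) is True   # boundary value
assert is_eligible_to_drive(17) is True   # value above boundary; catches "== 16"
```

With the buggy `age == 16` implementation, the first two assertions would still pass; only the test for 17 exposes the defect, which is exactly why the technique demands a value above the boundary.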
  • Equivalence Class Partitioning – divides the input data into partition classes, so each partition is covered at least one time.

Imagine the acceptance criteria calls for the app to process 1 to 10 orders. Two test cases are developed to trigger orders less than 1 and orders 11 or higher. If a user inputs -7 or 34 orders, for example, an error is generated because both are invalid values. An order of 5, however, is within the range of 1 to 10, so the order processes without error.

| Partition 1 (Invalid Values) | Partition 2 (Valid Values) | Partition 3 (Invalid Values) |
| --- | --- | --- |
| 0 or Less | 1 – 10 | 11 or Higher |
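The order-processing partitions translate to one representative test value per class. Again a hypothetical sketch; `validate_order_quantity` is an illustrative name, not from the article.

```python
def validate_order_quantity(qty: int) -> bool:
    # Acceptance criteria: the app processes 1 to 10 orders.
    return 1 <= qty <= 10

# Equivalence Class Partitioning: one value from each partition
# covers every class at least once.
assert validate_order_quantity(-7) is False  # Partition 1: 0 or less
assert validate_order_quantity(5) is True    # Partition 2: 1-10 (valid)
assert validate_order_quantity(34) is False  # Partition 3: 11 or higher
```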
  • Cause-Effect Table – also known as Decision Table Testing, this technique identifies the many possible input and output combinations. Writing the combinations in a table helps validate scenarios and reveal gaps.

Let’s consider these theoretical requirements: “If a customer is rated Level 1, then the customer is available for 2 free upgrades. If a customer is rated Level 2 or Level 3, then the customer must see a salesperson because they are eligible for 1 free upgrade if they paid their service fees.”

Causes:

  • Customer is Level 1
  • Customer is Level 2
  • Customer is Level 3
  • Customer paid service fees

Effects:

  • Customer receives 1 free upgrade
  • Customer receives 2 free upgrades
  • Customer must see a salesperson

In the table below, empty slots identify invalid scenarios. Slots with an ‘X’ are valid scenarios and should contain an executable test. The slots with a ‘?’ mean a question should be raised for clarification.

|  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Level 1 | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N |
| Level 2 | Y | Y | Y | Y | N | N | N | N | Y | Y | Y | Y | N | N | N | N |
| Level 3 | Y | Y | N | N | Y | Y | N | N | Y | Y | N | N | Y | Y | N | N |
| Service Fees | Y | N | Y | N | Y | N | Y | N | Y | N | Y | N | Y | N | Y | N |
| 1 Upgrade |  |  |  |  |  |  |  |  |  |  | X |  | X |  |  |  |
| 2 Upgrades |  |  |  |  |  |  | X | ? |  |  |  |  |  |  |  |  |
| Salesperson |  |  |  |  |  |  | ? | ? |  |  | X | X | X | X | ? | ? |

Based on the example, Scenario 1 is invalid because a customer cannot be rated as Level 1, Level 2, and Level 3. Scenarios 2, 3, 4, 5, 6, 9, and 10 are also invalid.

Scenario 11 is valid because the customer is rated as Level 2 and paid their service fees. Scenarios 12, 13, and 14 are also valid scenarios.

Scenario 7 is a valid scenario with a question. It’s valid because the customer is rated as Level 1 and paid their service fees, but the “?” is there because the requirement does not mention whether a customer rated as Level 1 should see the salesperson.

Scenario 8 has two questions because the customer is rated as Level 1 but did not pay their service fees. The tester can therefore ask whether a Level 1 customer receives 2 upgrades without paying the service fee. While I personally don’t believe the customer should receive 2 upgrades, we shouldn’t let this opinion cloud the test design.

Scenarios 15 and 16 have question marks because one scenario shows a service fee paid by a customer who is not rated as Level 1, Level 2, or Level 3. The other scenario shows an ‘N’ in every slot, which raises the question of whether there are levels beyond the stated three.
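A decision table like this maps naturally onto an enumerated set of parametrized tests. The sketch below is a hypothetical oracle for the requirements above, assuming the reading that only Level 2 and Level 3 scenarios have unambiguous expected effects; the function and variable names are illustrative, not from the article.

```python
from itertools import product

def expected_effects(level1, level2, level3, paid_fees):
    """Hypothetical oracle for the decision table.

    Returns a set of expected effects, or None for combinations the
    requirements leave invalid or ambiguous (the empty and '?' slots).
    """
    if sum([level1, level2, level3]) > 1:
        return None  # invalid: a customer can hold only one level rating
    if level1:
        return None  # '?': requirements don't cover salesperson/fees for Level 1
    if level2 or level3:
        effects = {"salesperson"}
        if paid_fees:
            effects.add("1 free upgrade")
        return effects
    return None  # '?': no level at all is unspecified

# Enumerate all 16 Y/N combinations, matching the table's columns.
testable = {}
for combo in product([True, False], repeat=4):
    effects = expected_effects(*combo)
    if effects is not None:
        testable[combo] = effects

# Only scenarios 11-14 (Level 2 or Level 3 alone) yield executable
# expectations under this reading; the rest go back as questions.
assert len(testable) == 4
```

Enumerating the combinations in code makes the gaps just as visible as the table does: every combination that returns `None` is either an invalid scenario or a question to raise with the product owner.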

AI in software testing is here to stay

Artificial Intelligence (AI) simulates human intelligence in machines, which paves the way for automating test cases, accelerating testing cycles, preparing test data, analyzing test logs, and many other testing processes. Many organizations are implementing AI to assist with early defect detection, reduce costs, and generate a variety of test types, such as regression and API test cases.

With such rapid adoption, it’s important to establish a dialogue with IT and other governing departments. For a deeper dive into AI, check out Revolutionizing Software Testing: The Power of AI In Action.

Selenium and Playwright are the most popular automation libraries

Originally released in 2004, Selenium is backed by an extensive community. Playwright is backed by Microsoft and was initially released in January 2020. Both automation libraries are popular in the testing community. Cypress is another contender, but it’s limited because it only supports JavaScript, whereas Selenium and Playwright support multiple languages.

Selenium has the same bindings for each supported language, which means the same commands available in Java are also available in Python, for instance. Note that Playwright offers more commands and features for JavaScript/TypeScript than for its other supported languages.

Both libraries also support multiple operating systems, multiple browsers, and multiple test runners. The below table compares Selenium and Playwright.

|  | Selenium | Playwright |
| --- | --- | --- |
| Supported Languages | Java, Python, C#, JavaScript, Ruby | Java, Python, C#, JavaScript, TypeScript |
| Operating Systems | Windows, Mac, Linux | Windows, Mac, Linux, Solaris |
| Browsers | Chrome, Safari, Firefox, etc. | Chromium, Firefox, WebKit |

Codeless and low-code tools are valuable

Codeless and low-code tools are valuable if paired with good test design and testing best practices. Broadly speaking, a codeless tool allows a person without any coding knowledge to perform a testing function, whereas a low-code tool enables an individual with some coding skills to perform the same functions.

Even professionals who are fluent in code can find value in these tools, freeing up time to get back to building their apps. As always, it’s important to understand the features and limitations of these tools.

The great divide between developers and testers

Both developers and testers are crucial to building a product. A developer writes code to create functionality, while a tester determines if the functionality passes or fails per requirements. The development team wants to discover and fix bugs before the software reaches production, so end users won’t run across these defects themselves and switch products or leave negative reviews.

In some organizations, there is friction between developers and testers. Developers blame testers for being a bottleneck, since a release is held up until testing is complete. Testers may cite unrealistic expectations and a lack of time allotted for thorough validation. They might also bite back, complaining that developers take a long time to deliver code, only to throw it over the wall half-baked.

There may be some cases where there are flashes of truth on either side of the fence. But heading into the new year, let’s remember we’re all on the same team. How’s this for a resolution? Let’s honor our respective crafts because we’re both invested in a shared goal: delivering a great product.

🧐 What are your top lessons learned in testing this year?
Have a wonderful year ahead! 🪩