
Best Practices for Large Test Automation Projects 

By Noa Green

Collaboration is key to a successful project! 

Working on a group project always raises challenges, but with automation projects, where the UI keeps changing and components are shared, collaboration is essential. This is even more true when the projects and teams are large. In this blog, I’ll share some of the best practices we’ve developed from working together with many customers on Testim.

Projects – one vs. many

When you start creating tests in Testim, they will be created in a new project automatically. When should you add a new project?

  • If you’re working on similar applications, it’s generally better to work on the same project so you can share components
  • If you’re dealing with different applications and you do not expect to share many components, separate them into different Testim projects

Naming conventions

Aligning the names of all components in a project is extremely important. Consistent names make it clear to everyone what each component does and why it exists, and tell the team when and how to use it. Here’s some guidance on naming conventions:

  • Test names: make it clear what the test is testing. For example, which feature or module it’s covering and what action it’s testing (one possible naming pattern is sketched after this list).
  • Shared components: name them in a way that makes them easy to find. It will increase reuse and you’ll avoid authoring and maintaining duplicate components, which will save you a lot of time!
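
To make the convention concrete, here is a minimal sketch of one possible pattern. The `testName` helper, the “SG” prefix, and the example names are all hypothetical illustrations rather than Testim features; the point is simply that names built from the same parts are easy to search for.

```typescript
// Hypothetical naming pattern: "<Feature> - <action> - <variant>".
// The helper and all names below are illustrative only, not a Testim API.
const testName = (feature: string, action: string, variant = ""): string =>
  [feature, action, variant].filter(Boolean).join(" - ");

console.log(testName("Checkout", "apply coupon", "guest user")); // "Checkout - apply coupon - guest user"
console.log(testName("Login", "reset password"));                // "Login - reset password"

// Shared components that share a prefix ("SG" here) are easy to find when searching:
const sharedGroupNames = [
  "SG - Login - valid credentials",
  "SG - Navigation - open settings page",
];
console.log(sharedGroupNames.join("\n"));
```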

Shared groups

Shared groups are a great way to increase reuse and minimize maintenance when the application changes. If you fix a problem in a shared group, all dependent tests will be fixed. Here’s some guidance on using shared groups:

  • Before creating new groups, search (using the naming convention) to determine if a similar group already exists!
  • Design groups to maximize reusability. Make sure your groups are flexible and fit different flows and use cases:
    • Use parameters to ensure you can test the same flow with different values, and not just one specific use case
    • Conditions enable your group to fit into more flows! For example, you can add a condition to run a group that activates an account only if its status is “not active” (a minimal sketch of this idea follows this list).
      NOTE: Conditions affect all tests using that group. You can put the condition on a parent group to avoid that.
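
To make the parameters-and-conditions idea concrete, here is a minimal TypeScript sketch that treats a shared group like a parameterized function with a guarded step. The `Account` type, the `activateAccount` function, and the “not active” status are hypothetical stand-ins for whatever your group actually does; this is an analogy for the structure, not Testim’s API.

```typescript
// Hypothetical model of a shared group: parameters make it reusable, and a
// condition lets it fit flows that start from either account state.
interface Account {
  email: string;
  status: "active" | "not active";
}

// Parameterized "group": works for any account, not one hard-coded use case.
function activateAccount(account: Account): Account {
  // Condition: only run the activation steps when the account is not active.
  if (account.status === "not active") {
    // ...the group's activation steps would run here...
    return { ...account, status: "active" };
  }
  return account;
}

// The same group serves two different flows:
const newSignup = activateAccount({ email: "new@example.com", status: "not active" });
const returning = activateAccount({ email: "old@example.com", status: "active" });
console.log(newSignup.status, returning.status); // "active" "active"
```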

Branches

Working with branches allows different team members to make changes in their tests without affecting the scheduled running tests or shared components. 

  • If you need to update an existing test:
    • Create a new branch (remember to name it well!)
    • Update the test on the new branch
    • Run your test, and other tests that could be affected (one way to trigger such a run from the command line is sketched after this list)
    • Go back to master to merge your branch
    • Go over the merge summary
    • Merge!
    • NOTE: The following attributes are shared across branches, so changing them will affect tests on other branches: test description, test base URL, test labels, test data, and test config
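
If your team also triggers runs from the command line (for example in CI), you can point a run at your branch before merging. The sketch below is only an illustration: it assumes the Testim CLI is installed and that your CLI version supports token, project, grid, and branch options, and the token, project ID, grid name, and branch name are placeholders, so confirm the exact flags with `testim --help` or the Testim docs.

```typescript
// Hypothetical helper that shells out to the Testim CLI to run tests on a
// specific branch before merging. All values are placeholders; flag names may
// differ by CLI version, so verify them against `testim --help`.
import { execSync } from "node:child_process";

function runTestsOnBranch(branch: string): void {
  const cmd = [
    "testim",
    "--token", process.env.TESTIM_TOKEN ?? "",
    "--project", "MY_PROJECT_ID",  // placeholder project ID
    "--grid", "Testim-Grid",       // placeholder grid name
    "--branch", branch,            // run the tests as they exist on this branch
  ].join(" ");

  execSync(cmd, { stdio: "inherit" });
}

runTestsOnBranch("fix-login-selector"); // placeholder branch name
```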

Tag Failures

When you identify the cause of a test failure, tag it with a reason. This helps ensure you won’t troubleshoot tests that another team member has already checked! Additionally, you will start to build up a history of failure types that can help inform process or infrastructure changes that can improve test success. To make this easier, if Testim identifies comparable failures, it automatically suggests the failure reason for you! 

  • Help your team analyze and track test failures
    • Are you all experiencing environmental issues? If so, it will be easier to allocate resources to address the problem
    • Are you finding bugs? Make sure everyone knows!
    • Is there a problem with test logic? Perhaps this is an issue for everyone using a similar group? 
    • Was invalid data the cause? What changed to affect the data? 

Result labels

Result labels are a great way to add information to your run results, for example ‘pr-blocking’ or ‘push-blocking’ if the run is blocking a pull request or code push. You can even filter your suite runs based on these labels. Using these labels can make it easier for other team members to understand which version the tests ran against, why each run was triggered, how urgent the failures are, and more.
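
If your runs are triggered from CI, the labels can be attached when the run is kicked off. The sketch below assumes a CLI version that accepts a result-label option; the flag name, project ID, and grid name are assumptions to verify against your own setup and the Testim docs.

```typescript
// Hypothetical CI step that tags a run with result labels so teammates can see
// why it ran and which build it covered. Flag names and IDs are placeholders.
import { execSync } from "node:child_process";

const labels = ["pr-blocking", `build-${process.env.BUILD_NUMBER ?? "local"}`];

const cmd = [
  "testim",
  "--token", process.env.TESTIM_TOKEN ?? "",
  "--project", "MY_PROJECT_ID",  // placeholder project ID
  "--grid", "Testim-Grid",       // placeholder grid name
  ...labels.flatMap((label) => ["--result-label", label]), // assumed flag name
].join(" ");

execSync(cmd, { stdio: "inherit" });
```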

Documentation and conversation

Keeping track of your work outside of Testim can also improve collaboration and the success of your project:

  • Create a doc with your naming conventions to make sure everyone’s following the same guidelines
    • Migrating a test list into Testim? Share it with everyone so that they can track the progress and make sure there are no duplications
    • Finding cool tricks that improve your tests and are relevant for the application or other members? Share them!
  • Use a Slack channel, or even schedule quick syncs, to share your findings and progress with the team
  • Join the Testim Community. If you are wondering what the best practice would be for your specific use case or team, there are members of the Testim Community who might be able to help. Talk to other Testim users and hear what’s working for them!
  • We also have weekly training sessions you can register for to help you grow your skills. 

We hope you’ve enjoyed these tips and will start using them in your projects and tests. Thanks for reading, and look out for other blogs from the customer success team!