As application code has evolved from monolithic to client-server and now to microservices, test automation has had to evolve as well. Test automation traditionally relies heavily on the user interface and web services endpoint layers, which drives test pass rates down. These layers are flaky because of data dependencies and inconsistencies, environmental instability, slow execution, and expensive maintenance of the test code.
In a Tiered-Level Test Automation approach, the test code follows the test pyramid popularized by Martin Fowler, in which only minimal focus is given to user interface and user action tests. As the tests move upstream, more scenarios and test permutations are written at the lower tiers of the automation.
Black box testing
Black box testing exercises the application from the user's point of view. In manual testing, the tester interacts with the page and verifies that the functionality works as expected by visually checking the UI components and performing actions on them.
White box testing
White box testing looks at the application code under test.
public List fetchOverridesFromConfig(IAppConfigContext context) {...}

public void applyOverrides(PageDefinition template, List moduleOverrides,
        ExpClassification expClassification) {...}
Unit testing
Unit testing looks at the smallest testable parts of an application, and test automation should be heavy on unit tests. We implement unit testing by creating tests for individual methods. The technology stack used in our project comprises Java, JUnit, Mockito, JSON, and ObjectMapper.
Implementation highlights
Our unit tests mock the dependencies and pass them to the application method under test to run it through the business logic. They then assert that the actual response matches the expected response stored in a JSON file. The following sample code is a unit test.
@Test
public void testFetchTemplateFromRepo()
        throws JsonGenerationException, JsonMappingException, IOException {
    CustomizationSample s = getSample(name.getMethodName());
    LookUpConfiguration.RCS_TEMPLATE_CONFIG = Mockito.mock(LookUpConfiguration.class);
    String templateString = mapper.writeValueAsString(s.getDefinition());
    Mockito.when(LookUpConfiguration.RCS_TEMPLATE_CONFIG.getValue(
            Mockito.any(RCSDictionaryKey.class),
            Mockito.any(IAppConfigContext.class)))
            .thenReturn(templateString);

    PageDefinition definition = helper.fetchPageTemplateFromConfig(
            mockDataFactory.manufacturePojo(PageTemplateRepoConfigContext.class));

    Assert.assertEquals(definition.toString(), s.getOutput().toString());
}

@Test
public void testPageDefinition() {
    CustomizationSample s = getSample(name.getMethodName());
    helper.applyOverrides(s.getDefinition(), s.getOverrides(), ExpClassification.ALL);
    Assert.assertEquals(s.getDefinition().toString(), s.getOutput().toString());
}
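The expected inputs and outputs for these tests live in JSON files keyed by the test method name. As a rough sketch only, a helper along the following lines could back the getSample(...) call above; the resource path and the direct ObjectMapper mapping to CustomizationSample are assumptions, not the project's actual layout.

import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;

import com.fasterxml.jackson.databind.ObjectMapper;

public class CustomizationSampleLoader {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Loads the expected input/output pair for a test from a JSON file whose name
    // matches the test method, e.g. src/test/resources/samples/testPageDefinition.json
    // (the path is an assumption for illustration).
    public static CustomizationSample getSample(String testMethodName) {
        File sampleFile = new File("src/test/resources/samples/" + testMethodName + ".json");
        try {
            return MAPPER.readValue(sampleFile, CustomizationSample.class);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}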
Integration mock tests
The strategy and rationale for white box testing is to uncover issues and problems earlier, upstream in the development cycle. Integration tests can be written to verify the contracts of web services independently of the whole system.
For example, if a configuration system is down or its data is wiped out, does that mean the application code cannot be signed off because the tests are failing? One test implementation is to mock the values from the configuration system, use them as the dependencies to run through the integrated business logic, and assert that the actual response matches the expected one. The technologies used in the mock tests include JUnit parameterized tests, Mockito, PowerMockito, JSON, and Gson.
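As a minimal sketch of this pattern, a JUnit parameterized test can feed mocked configuration payloads through the integrated logic even when the configuration system is unavailable. The ConfigClient and OverrideService names below are placeholders for illustration, not the project's actual classes.

import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.mockito.Mockito;

@RunWith(Parameterized.class)
public class ConfigOverrideMockTest {

    private final String configValue;
    private final boolean expectOverrides;

    public ConfigOverrideMockTest(String configValue, boolean expectOverrides) {
        this.configValue = configValue;
        this.expectOverrides = expectOverrides;
    }

    // Each row is one mocked configuration payload plus the expected outcome.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "{\"overrides\":[{\"module\":\"header\"}]}", true },
                { "{\"overrides\":[]}", false }
        });
    }

    @Test
    public void testBusinessLogicWithMockedConfig() {
        // ConfigClient and OverrideService are hypothetical names used for illustration.
        ConfigClient configClient = Mockito.mock(ConfigClient.class);
        Mockito.when(configClient.getValue(Mockito.anyString())).thenReturn(configValue);

        OverrideService service = new OverrideService(configClient);
        Assert.assertEquals(expectOverrides, service.hasOverrides("home_page"));
    }
}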
Implementation highlights
How are the unit tests different from the integration tests? The unit tests exercise individual methods in isolation, while the integration tests call those methods together as a flow.
@Test
public void testResponse() throws IOException {
    PageDefinition pageDefinition = getPageDefinition(rcsConfigurationContext, true, true);
    new PageDefinitionValidator.Validator(pageDefinition)
            .validateNull()
            .pageDefTemplate()
            .modules();
}

private PageDefinition getPageDefinition(RCSConfigurationContext context,
        boolean customized, boolean includeToolMeta) {
    ...
    template = PageDefinitionHelper.getInstance()
            .fetchPageTemplateFromConfig(context);
    PageDefinitionHelper.getInstance().populateModuleInput(
            template, context, includeToolMeta);
    if (customized) {
        List overrides = PageDefinitionHelper.getInstance()
                .fetchOverridesFromConfig(context);
        PageDefinitionHelper.getInstance().applyOverrides(
                template, overrides, expClassification);
    }
    return template;
}
Services endpoint tests
The services layer tests run against the RESTful endpoints with only a few positive scenarios and invalid requests. The responses are then validated against the expected data values, error codes, and messages. The technologies used are Java, Gson, and the Jersey API.
Implementation highlights
We use JSON to store the requests and responses, then parse them and pass them into the test methods through a data provider.
{ "getCustomizationById": [{ "marketPlaceId": "0", "author": "SampleTestUser2", "responseCode": "200" } ], "negative_getCustomizationById": [{ "marketPlaceId": "0", "customizationId": "", "domain": "home_page", "responseCode": "500", "errorMessage": "Invalid experience usecase." },... public void testPageDefinitionService(String url, Site site, String propFile, JSONObject jsonObject) throws Exception { // Build the context builder from JSON ... // Create Customization Response createCustomizationResponse = CustomizationServiceClientResponse .createCustomization(url, propFile, jsonObject.getString("author"), jsonObject.getString("experienceUseCase"), xpdContextBuilder, xpdModuleOverridesList); // Get the customization By context String customizationId = CustomizationServiceClientResponse .getCustomizationIdByContext(url, propFile, jsonObject.getString("experienceUseCase"), xpdContextBuilder); // Validate if the module override is applied on pageDefinition new PageDefServiceResponseDataValidator .Validator(response, xpdModuleOverridesList) .moduleData() .moduleLocators();
UI tests
User interface test automation is minimal and focuses on actions on the page, for example, editing and saving the page after making customization changes. The technologies used are Java, Selenium WebDriver, HttpClient, JSON, and Gson.
Implementation highlights
The UI tests iterate through a collected list of web elements and actions rather than handling each locator individually for every row and column, as sketched below. The save flow tests also integrate with the service response to verify against the source of truth: the data passed through the UI is validated against the service response to determine whether the two are equal. This strategy validates more data on the fly instead of hard-coding the expected output.
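As a minimal sketch of the element-iteration idea, assuming the module titles share a common locator and the expected titles come from the service response, the validation can loop over the collected elements instead of asserting each locator separately. The CSS selector and method names below are illustrative, not the project's actual page objects.

import java.util.List;

import org.junit.Assert;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ModuleTitleValidator {

    // Collects all module title elements in one call and checks each one against the
    // titles returned by the service response, rather than one hard-coded locator per row.
    public static void validateModuleTitles(WebDriver driver, List<String> expectedTitles) {
        List<WebElement> titleElements = driver.findElements(By.cssSelector(".module-title"));

        Assert.assertEquals("Module count mismatch",
                expectedTitles.size(), titleElements.size());
        for (int i = 0; i < titleElements.size(); i++) {
            Assert.assertEquals(expectedTitles.get(i), titleElements.get(i).getText());
        }
    }
}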
The functional flow is also verified through integration with the services. For example, the edit and save flows are tested by getting the service response, using it as the source of truth, and verifying that the saved data matches it. Once again, this approach validates more data on the fly rather than hard-coding the expected output in the test class or in a properties file.
public Validator save() {
    SaveServiceAPI saveService = new SaveServiceAPI();
    saveCustomization.clickCancelEdits();
    saveCustomization.clickRestoreDefaults();
    saveCustomization.editModuleBeforeSaving(content(propFile, "SAVE_ON_TITLE"));
    saveCustomization.clickSave();
    assertChain.string().equals(
            content(propFile, "SAVE_ON_TITLE"),
            saveService.getModuleTitle(content(propFile, "SAVE_REQUEST_URL")),
            "Saved Title in the UI doesn't match with the Title in Service response");
    return this;
}
Conclusion
Discovering issues upstream is more efficient and less expensive than finding them after the product is already developed and in production. The Tiered-Level Test Automation approach encourages developers to think about, and sets an example for, where and when it is best to test the product.
Implementing the tiered test automation was indeed a collaborative effort. Thanks to my colleagues, Kalyana Gundamaraju, Srilatha Pedarla, Krishna Abothu, and Manoj Chandramohan for their contributions to this test automation design.