Monday, November 27, 2017

Expected Conditions and Interactions

Getting stable and reliable web test cases is the goal of any automation engineer. Part of providing that reliability is checking for a particular UI state before interacting with the application further; for example, checking that a button is visible before trying to click on it.

Selenium's ExpectedConditions are used to make a test case wait until a certain user interface state is reached before continuing. For example:

  • Wait until an Element is Clickable
  • Wait until an Element has a specific Text value
  • Wait until an Element is Visible
  • etc.

Individual conditions can also be combined with AND and OR operators to create more complex conditions; for example, wait until an element is visible AND has a specific text value, as sketched below.
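
As a minimal sketch, a combined wait could look like the following (this assumes an existing WebDriver instance named webDriver, and uses a hypothetical locator and a hypothetical expected text value of "Login"):

// Hypothetical locator and expected text, for illustration only
By buttonLocator = By.cssSelector("button");
WebDriverWait wait = new WebDriverWait(webDriver, 5);

// Wait until the element is visible AND its text equals "Login"
wait.until(ExpectedConditions.and(
        ExpectedConditions.visibilityOfElementLocated(buttonLocator),
        ExpectedConditions.textToBe(buttonLocator, "Login")));

ExpectedConditions.or works in the same way when any one of the supplied conditions is sufficient.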

Checking and waiting for a certain application state is a good approach: it provides a checkpoint in the test, confirming that the test code and the application are in sync with each other.

However, in dynamic websites user interface states can change, and a particular state may only exist for a short period of time. In these circumstances, even when a certain state is reached, there is no guarantee that the application will still be in that state when the test's next step interacts with it.

For example, let's assume that we want to click on a button. We first use ExpectedConditions to wait for the element to be visible; once this state is reached, we use findElement to obtain the element and call its click method.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class ClickButtonTest {

    private WebDriver webDriver;
    private static final By buttonLocator = By.cssSelector("button");

    @BeforeClass
    public void setupBrowser() {
        webDriver = new FirefoxDriver();
        webDriver.get("http://the-internet.herokuapp.com/login");
    }

    @Test
    public void clickLoginButton() {
        // Wait (up to 5 seconds) for the button to be visible, then find it and click it
        WebDriverWait wait = new WebDriverWait(webDriver, 5);
        wait.until(ExpectedConditions.visibilityOfElementLocated(buttonLocator));
        WebElement button = webDriver.findElement(buttonLocator);
        button.click();
    }

    @AfterClass
    public void closeBrowser() {
        if (webDriver != null) {
            webDriver.quit();
        }
    }
}


Here the clickLoginButton method looks quite safe: we wait (up to a maximum of 5 seconds) for the expected UI state to be reached, and once it is reached we find the element and click it.

The problem in dynamic websites is that there is a time delay between the ExpectedCondition being satisfied and the click method being called. In that gap the application may no longer be in the same state, leading to an exception from either findElement or the click method.

To solve this we need to combine the interaction (i.e. the click) with the ExpectedCondition so that they are treated as one transaction: either both succeed or both fail.

This is where we can add an Expected Interaction and combine it with the Expected Condition.
First we need to add our expected interaction of click:


import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedCondition;

public class ExpectedInteractions {

    public static ExpectedCondition<Boolean> wasClickedBy(final By locator) {
        return new ExpectedCondition<Boolean>() {
            @Override
            public Boolean apply(WebDriver webDriver) {
                try {
                    // Attempt the interaction; report success only if it completed
                    webDriver.findElement(locator).click();
                    return true;
                } catch (Exception e) {
                    // Element not found or not clickable yet - keep waiting
                    return false;
                }
            }
        };
    }
}


We can now use this within the test case by combining the expected condition and the interaction, as shown below.

@Test
public void clickLoginButton() {
    WebDriverWait wait = new WebDriverWait(webDriver, 5);
    wait.until(ExpectedConditions.and(
            ExpectedConditions.visibilityOfElementLocated(buttonLocator),
            ExpectedInteractions.wasClickedBy(buttonLocator)));
}


Hope this article was useful!

Thursday, July 6, 2017

Deciphering Selenium Grid Configuration

Why Grid Configuration is Important

Many websites offer articles on how to quickly get a Selenium grid up and running, but very few go in depth on the different configuration options the grid hub and node have that can affect the reliability of the grid. It was only after I encountered problems myself with the default configuration that I decided to dig a little deeper. I found that no quick reference was available! Ultimately I had to dig into the Selenium code on GitHub to find the answers.

Below is a sharing of what I found, and a few tips others may find interesting.

Hub Configuration

The following shows the default configuration for the Hub
This can be found in the Selenium repository on GitHub.

Default hub configuration
port
The port number the hub will use.

newSessionWaitTimeout
Specified in milliseconds, the time after which a new test waiting for a node to become available will time out. When that happens, the test will throw an exception before attempting to start a browser. An unspecified, zero, or negative value means wait indefinitely.


When trying to execute more tests than you have grid nodes, the default value of waiting indefinitely will hang the test cases. To avoid that, set this to the execution time of your slowest test case!



servlets
List of extra servlets the grid hub will make available. The Jetty web server that the grid hub uses is able to load other services in addition to the hub itself.

withoutServlets
 A list of default servlets to disable. Advanced use cases only. Not all default servlets can be disabled.

custom
Comma-separated key=value pairs for custom grid extensions. NOT RECOMMENDED -- may be deprecated in a future revision. Example: -custom myParamA=Value1,myParamB=Value2

capabilityMatcher
Name of a class implementing the CapabilityMatcher interface. Specifies the logic the hub will follow to decide whether a request can be assigned to a node; for example, if you want the matching process to use regular expressions instead of an exact match when specifying the browser version. ALL nodes of a grid ecosystem would then use the same capabilityMatcher, as defined here.


When the hub is looking to create a new session, it first has to find a node whose capabilities match the desired capabilities of the new session. The capability matcher performs the check to see which node passes the criteria. The default capability matcher matches nodes on Platform, BrowserName and BrowserVersion, and only considers a capability if the node actually provides it. A minimal custom matcher sketch is shown below.
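
As an illustration only, here is a minimal sketch of a custom matcher (assuming the Selenium 3 grid API, where the interface is org.openqa.grid.internal.utils.CapabilityMatcher); it matches the browser name exactly and treats the requested version as a prefix:

import java.util.Map;
import org.openqa.grid.internal.utils.CapabilityMatcher;

public class PrefixVersionCapabilityMatcher implements CapabilityMatcher {

    @Override
    public boolean matches(Map<String, Object> nodeCapability,
                           Map<String, Object> requestedCapability) {
        Object requestedBrowser = requestedCapability.get("browserName");
        Object nodeBrowser = nodeCapability.get("browserName");
        // Browser name must match exactly when it is requested
        if (requestedBrowser != null && !requestedBrowser.equals(nodeBrowser)) {
            return false;
        }
        Object requestedVersion = requestedCapability.get("version");
        Object nodeVersion = nodeCapability.get("version");
        // Version, when requested, is matched as a prefix (e.g. "61" matches "61.0.1")
        if (requestedVersion != null && nodeVersion != null
                && !nodeVersion.toString().startsWith(requestedVersion.toString())) {
            return false;
        }
        return true;
    }
}

The class is then referenced by its fully-qualified name in the hub's capabilityMatcher setting and must be on the hub's classpath.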


throwOnCapabilityNotPresent 
If true, the hub will reject all test requests if no compatible node is currently registered. If set to false, the request will queue until a node supporting the capability is registered with the grid.

cleanUpCycle
Specified in milliseconds, defines how often the hub will poll running nodes for timed-out (i.e. hung) threads. The "timeout" option must also be specified.

debug
When set to true, enables extra logging.

browserTimeout
Specified in seconds, defines the number of seconds a browser session is allowed to hang while a WebDriver command is running (example: driver.get(url)). If the timeout is reached while a WebDriver command is still processing, the session will quit. Minimum value is 60. An unspecified, zero, or negative value means wait indefinitely.

timeout
Can also be specified as sessionTimeout 
Specifies the timeout before the server automatically kills a session that hasn't had any activity in the last X seconds. The test slot will then be released for another test to use. This is typically used to take care of client crashes. cleanUpCycle must also be set.
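
Pulling the hub settings together, a hub JSON configuration might look like the following minimal sketch (values are illustrative only; the key names follow the Selenium 3 hub JSON format, and the file would typically be passed to the server with the -hubConfig option):

{
  "role": "hub",
  "port": 4444,
  "newSessionWaitTimeout": 120000,
  "throwOnCapabilityNotPresent": true,
  "cleanUpCycle": 5000,
  "browserTimeout": 60,
  "timeout": 300
}

Note that newSessionWaitTimeout and cleanUpCycle are in milliseconds, while browserTimeout and timeout are in seconds.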

Node Configuration

The following shows the default configuration for the Node
This can be found in the Selenium repository on GitHub.

Default node configuration

capabilities
Defines the different capabilities of the browsers installed on the node, and how many instances of each browser can be open at once on the node (maxInstances).


Additional capabilities can be added, and then a custom capability matcher can be added into the Hub to match on them.



proxy
This defines which proxy (class) gets instantiated in the hub when the node registers. By overriding the default proxy, custom behaviour can be added into the grid hub for this particular grid node. For example, this class can override the beforeSession and afterSession methods on the hub to perform custom actions. Combine this with servlets on the node, and the new proxy on the hub can call the new servlet on the node to execute custom commands, for example a task kill after a session has completed.
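
As a rough sketch only, a custom proxy could look like the following (this assumes the Selenium 3 grid API, where the default proxy is org.openqa.grid.selenium.proxy.DefaultRemoteProxy; note that the registry parameter type in the constructor differs between Selenium 3 releases, e.g. Registry vs GridRegistry):

import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

public class CleanupRemoteProxy extends DefaultRemoteProxy {

    public CleanupRemoteProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterSession(TestSession session) {
        super.afterSession(session);
        // Custom behaviour after each session, e.g. calling a cleanup servlet on the node
        // (left as a hypothetical hook here)
    }
}

The fully-qualified class name is then set in the node's proxy setting, and the class must be available on the hub's classpath.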

maxSession
The maximum number of tests that can run at the same time on the node. Often confused with maxInstances in the capabilities section. For example, we could configure the node to have IE, Chrome and Firefox each with maxInstances=1 and then set maxSession=3 so that one instance of each can run at the same time, as in the sketch below.
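
As a minimal sketch, the corresponding part of a node JSON configuration could look like this (illustrative values, following the Selenium 3 node JSON format passed via -nodeConfig):

{
  "capabilities": [
    { "browserName": "internet explorer", "maxInstances": 1 },
    { "browserName": "chrome", "maxInstances": 1 },
    { "browserName": "firefox", "maxInstances": 1 }
  ],
  "maxSession": 3,
  "port": 5555,
  "register": true,
  "registerCycle": 5000,
  "hub": "http://localhost:4444"
}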

port
The port number the node will use.

register
If set to true, the node will attempt to re-register itself automatically to the hub if the hub becomes unavailable.

registerCycle
In milliseconds, specifies how often the node will try to register itself again. This allows an administrator to restart the hub without restarting (or risking orphaning) registered nodes. Must be used with the "register" option.

hub
The URL that will be used to post the registration request. This can also be specified via the hubHost and hubPort settings; however, this setting takes precedence.

nodeStatusCheckTimeout
In milliseconds, the connection/socket timeout used for the node's "nodePolling" check.

nodePolling
In milliseconds, specifies how often the hub will poll to see if this node is still responding.

unregisterIfStillDownAfter
In milliseconds, tells the hub that if this node remains down for longer than the specified time, the node will be unregistered from the hub.

downPollingLimit
The node is marked as "down" in the grid console if it hasn't responded after the number of checks specified in this parameter.

debug
When set to true, enables extra logging.

servlets
List of extra servlets the grid node will make available. The Jetty web server that the grid node uses is able to load other services in addition to the node itself.

withoutServlets
 A list of default servlets to disable. Advanced use cases only. Not all default servlets can be disabled.

custom
Comma-separated key=value pairs for custom grid extensions. May be deprecated in a future revision. Example: -custom myParamA=Value1,myParamB=Value2

Other Settings

While looking around in GitHub I came across some other settings that are not in the default configuration example files; these are:

For Hub:
  • prioritizer : class name : a class implementing the Prioritizer interface. Specify a custom Prioritizer if you want to sort the order in which new session requests are processed when there is a queue. Defaults to null (no priority = FIFO). A minimal sketch is shown after this list.
For Node:
  • id : optional unique identifier for the node. Defaults to the url of the remoteHost, when not specified.
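
As a rough sketch only (assuming the Selenium 3 grid Prioritizer interface in org.openqa.grid.internal.listeners, and a hypothetical custom "priority" capability that tests would set in their DesiredCapabilities), a custom prioritizer could look like this:

import java.util.Map;
import org.openqa.grid.internal.listeners.Prioritizer;

public class CustomPrioritizer implements Prioritizer {

    // Requests with a higher "priority" capability value are handled first;
    // requests without it default to 0. "priority" is a hypothetical custom capability.
    @Override
    public int compareTo(Map<String, Object> a, Map<String, Object> b) {
        return priorityOf(b) - priorityOf(a);
    }

    private int priorityOf(Map<String, Object> requestedCapabilities) {
        Object value = requestedCapabilities.get("priority");
        return value == null ? 0 : Integer.parseInt(value.toString());
    }
}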

Tuesday, June 13, 2017

Pragmatic Test Case Management

Having recently worked on a waterfall project where 600 test cases were documented and 200 defects logged, I wondered at the end of the project just how much time and effort had been spent creating this documentation, and of what use it would be once the project was delivered. I asked the tester involved how many of the 600 test cases were important enough to be used as regression tests in the future. I was surprised by the answer: only approx. 30 of the test cases could be used to validate the project's features in future regression campaigns. To me this means that a stunning 95% of the effort spent on test documentation had no future value. I also asked if any project member or senior manager was looking for test case traceability or test/defect metrics; the answer was again a surprising no. I therefore wondered what the point of all this documentation was, and whether a more pragmatic approach couldn't be employed.
So as I and the tester involved move on to our next project, which is using Agile, I wanted to share some guiding principles to reduce the documentation burden. Below are my thoughts.
Only document test cases that you will execute in the future
What this means is that if a test case is validating part of a feature, but will not be used in future regression campaigns, then yes, for sure run that test - but do not document it! If you're never going to execute it again, that test case simply becomes a statistic, and if no one on the project is looking for statistics, it's a waste of your time. An example of this is checking that a web page uses certain fonts: yes, it's a good test when validating that requirement, but is it important to keep including it in each regression run? I would say no, it isn't.
Test cases are only kept up to date if they are executed regularly
A test that has been documented but for one reason or another has not been executed for some time gets out of date. The application changes, and no tester is re-executing that test and updating its documentation. Multiply that by hundreds or thousands of tests and you have an entire test library that is ageing, losing value and increasing the debt on testers. Further, when automation engineers turn their attention to these tests, they find they don't have exact test definitions to work from, and a whole review and update of the documentation has to be done first. This leads to:
Delete test cases that have not been executed for more than X releases
If we are really only documenting the important tests, then the real litmus test of their importance is whether they are actually included in future regression campaigns. If a test has not been included in the regression campaigns of X releases (or alternatively Y sprints, or Z months), then the test really isn't as important as originally thought, and it can be safely deleted or archived.
Conversations replace defect reports
I often say to testers: don't just log defects and expect them to be fixed, have a conversation first. In a purely pragmatic setting, the conversation itself can replace the defect report. To promote collaboration, all defects observed should be discussed between developers and testers first. Defect reports should really only be used in cases where the developer is busy and needs a reminder to come back to that conversation later.
When a test is automated the test documentation can be deleted
To me, the only tests that should be documented are ones that require manual execution. Automated tests become the documentation for those tests. With this in mind there is no need to spend time keeping both the automated test and its associated documentation up to date: the automated test IS the test case, and only it needs executing and maintaining.
When I presented these ideas, the reaction was some rather confused-looking testers! To traditional testers, NOT writing tests, NOT writing defect reports and DELETING test documentation seems somewhat heartbreaking, and takes away any measure of their performance. Gone will be the days when a tester can report "I wrote 600 test cases and logged 200 defects". My reply to this is that in an Agile setting, writing documentation and of course finding defects reduces the team's velocity. We only need to measure the output of the team; when this takes a dip, a sprint retrospective can give more answers than pure statistics. All agile team members get measured by their collective output and the quantity of production defects.
I'd be interested to hear from others who have employed more pragmatic approaches to test and defect management.
