Software development is burdened with the risk that a project will not work and/or look the way it was planned – regardless of whether it is an Internet service or a desktop/mobile application.
Technological progress and the rapid expansion of the mobile device market pose ever more challenges to development teams. This is caused, for example, by the multitude of available browsers, operating systems (including their versions) and diverse screen resolutions. As a result, testing even a fraction of the possible configurations is a huge challenge.
Access to the most popular tablets and smartphones, plus virtual machines with the appropriate OS, is – at least theoretically – an antidote to that situation. In practice, however, the first doubts arise once the costs of devices, licenses, servicing, etc. are calculated. The availability of the devices should also be taken into account. I remember the times when I used to queue with the other testers for access to a specific type of smartphone in order to verify a piece of application functionality.
An alternative to the device store
This is where cloud-based testing platforms come in handy. Browserstack, CrossBrowserTesting and Sauce Labs are testing platforms through which we can verify that websites or mobile applications work correctly on various operating systems, browsers and real mobile devices. How does it work? In short: after logging in to one of these sites, you can test hundreds of different configurations straight from your browser. Sounds perfect, doesn’t it?
I have to admit that until about three years ago I was sceptical – especially after a situation in which Internet Explorer 10 on such a platform showed the tested application behaving correctly, while its local equivalent on a virtual machine showed something completely different. Unfortunately, it wasn’t an isolated case – the emulated mobile views didn’t work either. With time, however, these services were gradually improved, and their reliability – and with it their usefulness – increased significantly. First of all, we gained access to real mobile devices on which we can physically test our projects; previously, the Browserstack-type platforms offered no physical devices, only emulators, which left much to be desired. The situation is similar with Internet browsers and operating systems: over the last year I have not encountered a case where an application behaved differently on native hardware than on the configurations made available by the aforementioned online solutions.
So what is the catch? Nothing is free of charge: to take full advantage of the services offered, you need to choose one of several available subscriptions. However, each of these services lets you start a limited free trial.
What are the most interesting functionalities offered by these services? Let’s check them!
Since I use Browserstack most often, this tool will be a reference point for me.
One of the basic functionalities is logging in to a virtual desktop to perform tests from the browser using a previously defined configuration. For this purpose, you indicate the desired operating system, then the browser and its version. The choice is really broad – on the macOS side alone we can choose from versions such as:
- High Sierra
- El Capitan
- Mountain Lion
- Snow Leopard
Internet browsers are also available in various versions: Internet Explorer, Edge, Firefox, Chrome, Opera, Yandex and Safari – in accordance with the selected OS, of course.
The preparation for tests on mobile devices looks similar – first, we indicate one of the available operating systems, for example:
- Windows Phone
Then we can select one of the devices from a quite extensive list.
After setting the parameters we are interested in, we get a desktop with an open web browser, where we can work exactly as in a local browser. In addition, the screen displays a panel with options such as changing the resolution of the generated desktop, switching to another browser, changing the location, taking a screenshot and reporting a bug (for example via Jira).
Another functionality is support for automated tests: the scripts we prepare are run on pre-defined browsers of the selected portal.
The configuration is simple: when creating a WebDriver object, we use RemoteWebDriver and pass our Browserstack account credentials as parameters. Through capabilities we specify the operating system we want to use, together with the browser and its version, as well as settings such as certificate acceptance or screen resolution. The service has well-prepared documentation and manuals, which can prove helpful in the event of any problems.
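A minimal sketch of such a setup in Java might look as follows; the hub URL and capability names follow Browserstack's Selenium convention, while the username, access key and target page are placeholders:

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class BrowserstackExample {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials - taken from your Browserstack account settings
        String username = "YOUR_USERNAME";
        String accessKey = "YOUR_ACCESS_KEY";

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("os", "Windows");             // desired operating system
        caps.setCapability("os_version", "10");
        caps.setCapability("browser", "Chrome");         // browser and its version
        caps.setCapability("browser_version", "latest");
        caps.setCapability("resolution", "1920x1080");   // remote desktop resolution
        caps.setCapability("acceptSslCerts", true);      // accept self-signed certificates

        // The test is executed on Browserstack's remote hub instead of a local browser
        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("https://" + username + ":" + accessKey
                        + "@hub-cloud.browserstack.com/wd/hub"),
                caps);

        driver.get("https://example.com");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}
```

From here the test body is ordinary Selenium code – only the driver construction differs from a local run.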
In the Automate section, you will find detailed information about the tests already performed: platform, browser, execution time, logs and a video recording of the test. It is worth mentioning that, using the Browserstack API, we can provide additional data, such as the title of the test and its status (in the case of a failure, we can also include information about the reason).
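As a sketch of how the status can be reported from within the test itself, the helper below uses Browserstack's JavaScript-executor protocol (the action name and JSON shape follow Browserstack's Automate documentation; the failure reason is a made-up example):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

public class StatusReporting {
    // Marks the current Browserstack session as passed or failed,
    // so the result is visible in the Automate dashboard.
    static void setSessionStatus(WebDriver driver, boolean passed, String reason) {
        String script = "browserstack_executor: {\"action\": \"setSessionStatus\", "
                + "\"arguments\": {\"status\": \"" + (passed ? "passed" : "failed")
                + "\", \"reason\": \"" + reason + "\"}}";
        ((JavascriptExecutor) driver).executeScript(script);
    }
}
```

The test title, in turn, can be set up front through a capability, e.g. `caps.setCapability("name", "Checkout flow");`.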
If we run tests in parallel, they will also be supported – provided that our account has the appropriate package (following the rule: the more expensive the plan, the more options available).
What if our project is secured inside an internal network and we need a configured VPN to reach it? No problem: just download the appropriate files from the portal’s provider (at least this is what it looks like in Browserstack) and, after running them with the appropriate parameters, add one capability in the test code. This way, all traffic is transmitted through our private network.
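In Browserstack's case the flow is roughly as sketched below; the binary name, flag and capability name follow Browserstack's Local Testing documentation, and the key is a placeholder:

```java
// 1. Start the tunnel binary downloaded from Browserstack, e.g. from a shell:
//      ./BrowserStackLocal --key YOUR_ACCESS_KEY
// 2. Then add a single capability so that the remote browser routes its
//    traffic through the tunnel into our private network:
caps.setCapability("browserstack.local", "true");
```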
What about CI tools such as Jenkins? It cooperates without any problems. I had the opportunity to work on a project that used the stack Java + Selenium WebDriver + BDD (Cucumber) + Browserstack. The result: everything worked as planned, both locally and from Jenkins.
Apart from websites, the above-mentioned services also let us test mobile applications.
We can install them on real devices and then focus on manual testing. In addition, there is a module for automating mobile application tests. In the case of Browserstack, we may currently use the following frameworks: Appium, Espresso, XCUITest and EarlGrey. As for the devices to choose from, we have recent models such as the Samsung Galaxy S10 or iPhone XS, but also some older smartphones and tablets – altogether about 70 mobile devices.
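For the Appium route, driver construction mirrors the web case: instead of a browser we point the capabilities at an uploaded application build. A minimal sketch (the `device` and `app` capability names follow Browserstack's App Automate docs; the credentials and app id are placeholders):

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;

public class MobileAppExample {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("device", "Samsung Galaxy S10"); // real device from the list
        caps.setCapability("os_version", "9.0");
        // "app" points to an .apk previously uploaded to Browserstack,
        // which returns an identifier of the form bs://<hashed-app-id>
        caps.setCapability("app", "bs://<hashed-app-id>");

        AndroidDriver driver = new AndroidDriver(
                new URL("https://YOUR_USERNAME:YOUR_ACCESS_KEY"
                        + "@hub-cloud.browserstack.com/wd/hub"),
                caps);
        // ... interact with the application via standard Appium commands ...
        driver.quit();
    }
}
```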
Another frequently used functionality is taking screenshots on operating systems and web browsers of our choice. Thanks to this, we can quickly check what our website looks like on different platforms, without the need to launch each of the remote desktops separately.
What are the drawbacks?
Let’s examine a few of the weak sides of the discussed solutions.
To use all of the functionalities fully, you need the most expensive package, and you must decide how many people can use the service at the same time (of course, the more users, the higher the cost). The vendors have prepared a few options, so you can choose a subscription that meets your needs. However, you should be aware that – as I mentioned before – the costs of purchasing devices, licenses and maintenance would be much higher than the costs of paid access to the online solutions.
If we test web pages filled to the brim with scripts and graphics, there is a risk that the discussed solutions will start to suffer annoying slowdowns: pages load slowly, the cursor stops moving smoothly and the image itself becomes blurred. What’s worse, this can also happen with small websites. It is hard to say what causes it; perhaps server load. In some browsers (e.g. Internet Explorer or Safari) a message appears proposing to install an add-on that should improve the comfort of testing, but it does not fully solve the problem. Automated tests can also take longer: compared to a local machine or a Jenkins -> Node configuration, the process requires about 15 percent more time. You have to decide for yourself whether that overhead is acceptable or simply too long.
If we want to keep our project secret, such tools may not be the best choice, because information about the tests is stored on the aforementioned dashboard. At the same time, exposing the internal network through an external portal also makes us more ‘open’ to the world, which is not necessarily a desirable state.
The lack of a large preview of the running test is a huge inconvenience. In the case of Browserstack, it means that if you pause the script, you can’t see where you are. What is more, the service has an idle timeout of roughly 60 seconds: if you do not issue any command within that period, the test is aborted, so debugging turns into a race against the clock. Sauce Labs handles this problem slightly better, but it is not perfect in this respect either.
Browserstack, Sauce Labs or CrossBrowserTesting?
The functionalities of these services are very similar to each other. I described Browserstack earlier, but what about the others? Sauce Labs stands out positively: it has solved the preview of the currently running test much better, and it also handles comparing interface changes on the basis of screenshots (baseline vs. current state).
CrossBrowserTesting, on the other hand, describes its capabilities as follows: "Create Automated Tests Faster Than Ever with Codeless Record and Replay." Personally, it feels a bit like Selenium IDE on steroids, and I would be sceptical about the effectiveness of such self-sustaining tools – though I’m sure it’s at least interesting.
So there is something to choose from, but what should it be? I recommend setting up trial accounts on these portals and assessing which of the tools best suits your needs.
A few words to end with…
Over the last 2.5 years, each of the above-mentioned services has made good progress in various areas, such as:
- eliminating problems with false positives
- abandoning emulation in favour of real devices
- expanding technical support and documentation
- adding new functionalities
All this makes them worth investigating and considering for possible cooperation. In my opinion, if our application must work correctly across a widely dispersed set of configurations, these portals will certainly be helpful. If we focus on a narrower range of solutions, it is worth calculating the costs and considering whether native tooling would not work better in this case.