Table of contents
1. 🍂Introduction
2. 🍂About PerformanceTestRunner
2.1. 🌵Location
2.2. 🌵Test Running Specifics
2.2.1. Loading script libraries
2.2.2. Workspace-related scripts
2.2.3. Running on headless machines
2.3. 🌵Get the Generated Command Line
2.3.1. Via the Get Command Line option
2.3.2. Via the runner launch
3. 🍂PerformanceTestRunner GUI
3.1. 🌵Running the Utility
3.2. 🌵Parameters
3.2.1. Basic Tab
3.2.2. Overrides
3.2.3. Reports
3.2.4. Export Statistics
3.2.5. Properties
4. 🍂Command-Line Arguments
4.1. 🌵General Syntax
4.2. 🌵Required Arguments
4.3. 🌵Optional Arguments
4.4. 🌵Examples
5. 🍂PerformanceTestRunner Exit Codes
6. Frequently Asked Questions
6.1. What is automated testing?
6.2. What is test runner Ready API?
6.3. What are the utility parameters for PerformanceTestRunner GUI?
6.4. What are the PerformanceTestRunner Exit Codes?
7. Conclusion
Last Updated: Mar 27, 2024

How to Automate the test runs in Ready API?

Author: Rashi

🍂Introduction

Before releasing software into production, automated testing ensures that it functions properly and meets requirements. This software testing method employs scripted sequences that are executed by testing tools.

Manually repeating these tests is both costly and time-consuming. Once created, the automated tests can run indefinitely at no additional cost and are much faster than manual tests. Automated software testing can reduce the time it takes to run repetitive tests from days to hours.


Ready API makes it simple to automate test runs into your DevOps or Agile development workflow. With integrations down the pipeline, you can store your test cases in a Git repo, commit new code, and have your CI server run those stored tests during every build on nearly any environment, including Docker.

This article will show you how to start your load tests automatically.


🍂About PerformanceTestRunner

The main tool for automating test runs in Ready API is PerformanceTestRunner. It executes load tests and exports the results as specified. You run it from the command line to execute Ready API load tests.

The runner can be started either from the command line or from the Ready API user interface. The latter approach comes in handy when you need to build the command line and verify the settings.

🌵Location

The runner is located in the <Ready API>/bin directory. The file is called loadtestrunner.bat (Windows) or loadtestrunner.sh (Linux and macOS).

🌵Test Running Specifics

Loading script libraries

Every 5 seconds, Ready API checks the folder specified in the Script Library setting. If you set the script library folder dynamically, you must wait for it to load before you can use it. To accomplish this, add a delay to your script after the line that loads the script library:

// Point the project at the script library folder (read from a project property)
project.scriptLibrary = project.getPropertyValue("dataPath")

// Wait long enough for Ready API to pick up the newly specified library
Thread.sleep(12000)

Workspace-related scripts

Because Ready API does not open a workspace when you run tests with the command-line runner, the getWorkspace() method returns null. You must change your script so that it does not rely on the workspace object. For example, if you use the workspace object to access another project, load that project from a file instead:

import com.eviware.soapui.impl.wsdl.WsdlProjectPro

// Load the project directly from its file instead of going through the workspace
WsdlProjectPro project = new WsdlProjectPro("Full path to the project file")

Running on headless machines

To run a test on a headless machine, use the -Djava.awt.headless=true JVM option. It disables the test runner's interaction with UI elements. The Modifying JVM Settings topic of the Ready API documentation explains how to set the option.
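As a minimal sketch, on Windows you could pass the option through the JAVA_OPTS environment variable before launching the runner; this assumes your loadtestrunner.bat picks up JAVA_OPTS, and the project path and test name are placeholders:

set JAVA_OPTS=-Djava.awt.headless=true
loadtestrunner.bat "c:\my projects\my-project.xml" -nLoadTest1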

🌵Get the Generated Command Line

Via the Get Command Line option

  • Enter the required parameters in the dialogue and click Get Command Line.
  • Ready API will generate the command line and display it in a separate dialogue.
  • To copy the text to the clipboard, select it and then click Copy to Clipboard.

Via the runner launch

  • After you have specified the desired launch parameters in the dialogue, click Launch to start the runner; the window with the test log will appear.
  • After the run is complete, locate the generated command line at the beginning of the test log, select the text in the window, and copy it to the clipboard.

🍂PerformanceTestRunner GUI

To automate test runs in Ready API, you can use the PerformanceTestRunner GUI. The steps and the required parameters are described in the sections below.


🌵Running the Utility

To launch the runner from Ready API, right-click a load test in the Navigator and select the launch command from the context menu. After you select the menu command, Ready API displays a dialogue box where you can configure the run parameters. The generated command string can then be used to run the utility from the command line.

🌵Parameters

To automate test runs in Ready API, the following utility parameters are displayed in the configuration dialogue box:

Basic Tab

  • LoadUI Test: Defines the load test to be run. This parameter corresponds to the -n command-line argument, as the example after this list shows. If you do not specify this parameter, the runner will run all load tests in your project.
  • TestRunner Path: Specifies the fully-qualified name of the runner file (loadtestrunner.bat or .sh). The file is usually located in the <Ready API>/bin directory.
  • Save Project: Commands Ready API to save the test project before running the test.
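As an illustration of this correspondence, a run configured on the Basic tab produces a command line of the following shape (the project path and test name are placeholders):

loadtestrunner.bat "c:\my projects\my-project.xml" -nLoadTest1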
     

Overrides

  • Environment: Defines the environment in which the test will be run. The Environment drop-down list displays the environments that you specified in the Environments dialogue.
  • Arriving/Base VUs: Indicates the number of virtual users to be used in the simulation. If your test uses the Rate load type, this parameter overrides the Arriving VUs setting; if it uses the VUs load type, it overrides the load test's VUs setting.
  • Test Duration Limit: Specifies the maximum test execution time in seconds. If the value is zero or empty, the limit is not set.
  • Target Number Limit: Sets the maximum number of runs for each target (test case) in your load test.
  • Failure Limit: Specifies the maximum number of error messages allowed. If the logged error messages exceed this limit, the test execution is terminated.
  • Agents: Defines the remote agents to be used during the test run. This parameter is only relevant if Local Mode is disabled; when Local Mode is enabled, distribution is disabled.
  • Local Requests: If you enable this parameter, the runner will run all test cases on your local computer. Otherwise, the runner will distribute the test cases among the test agents.
  • Abort: Specifies how the test runner handles requests that are still running when the test is stopped. It can be f (the default) or t.
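On the generated command line, these overrides map to arguments such as -L (the three limits), -a (agents), and -A (abort), which are described in the Command-Line Arguments section below. For example (all values are hypothetical):

loadtestrunner.bat -L600:100:20 -a "192.168.0.10:8080=Scenario1" -At "c:\my projects\my-project.xml" -nMyLoadTest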

 

Reports

  • Create Reports: Commands the runner to save reports to the directory specified by the Root Folder parameter.
  • Root Folder: The fully-qualified name of the directory where exported test reports will be stored. If the specified directory does not exist, it will be created.
  • Report Format: Specifies the format of the generated reports. The options are Default, PDF, RTF, CSV, XLS, HTML, TXT, and XML.
  • Statistics: Indicates which statistics groups should be included in the report. By default, the report includes all statistics groups.
  • JUnit-Style Results: Commands the runner to produce a JUnit-style report.
  • Group results: Commands the runner to group JUnit-style results by assertion type. If this option is not selected, the results are grouped at the test, scenario, or target level.
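Together, these report settings correspond to the -r, -F, -S, -j, and -J command-line arguments described below. For example, a run that saves PDF reports together with a JUnit-style report might look like this (the paths are hypothetical):

loadtestrunner.bat -rC:\Work\Report -FPDF -j "c:\my projects\my-project.xml" -nLoadTest1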

Export Statistics

On the Export Statistics tab, you can save the data of statistics groups (charts) to .csv files. For each statistics group you want to export, enter the fully-qualified name of the target file in the corresponding field; groups without a file name will not be exported. If the specified file already exists, it will be overwritten. The parameters of the Export Statistics tab correspond to the -e (--export) command-line argument.

Properties

The Properties tab specifies the values of global, system, and project variables to be used during the test run. The values you enter override the values specified in other Ready API dialogues and panels. To specify a variable value, use the variable-name=value format. Separate multiple name=value pairs with spaces, or start a new line for each pair. If a variable name or value contains spaces, enclose the entire pair in quotation marks.
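For example, the following entry sets two hypothetical project variables, the second of which contains spaces in its name:

dataPath=C:\Work\Data "report title=Nightly Run"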

🍂Command-Line Arguments

To automate test runs in Ready API, you can configure the command line visually by running the utility from the user interface.

🌵General Syntax

The format of the runner command line is given below:

loadtestrunner.bat [optional-arguments] <test-project> -n<load-test-name>

🌵Required Arguments

test-project

The fully-qualified path to the project that contains the load tests to be run. If the file name or path contains spaces, enclose the entire argument in quotation marks.

Examples:

C:\Work\readyapi-project.xml
C:\Work\composite-project

-n<load-test-name>

It specifies the load test to run (usage -n<test-name>). If the test name contains spaces, enclose the entire parameter in quotation marks.

Example:

-nLoadTest1

🌵Optional Arguments

-a <args>, or --agents <args>

It specifies the remote agents to be used for the test run. Use the following syntax to specify an agent:

-a "<ip>:<port>[=<scenario1>[,<scenario2>...]]"

ip – The IP address or the computer name of the agent.

port – The port number to use.

scenario – The comma-separated list of scenarios to be simulated on the agent. If this part is not specified, Ready API will run all scenarios on the specified agent.

Example:

-a "127.46.44.12:80=Scenario1"

 

-A<args>, or --abort <args>

Specifies whether the runner terminates running requests when a test is stopped. The argument can be either f or t:

If the argument is t, the ongoing requests are canceled, and their results are excluded from the overall test results.

If the argument is f, or the argument is not specified, the test runs until all ongoing requests have been completed, and the request results are included in the test results.
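For example, to cancel ongoing requests when the test is stopped:

-At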
 

-D<args>

It specifies the value of a system property for the test run. The specified value will override the variable value during the run.

Usage: -D<variable>=<value>. If the value includes spaces, enclose the entire argument in quotation marks. To override several variables, specify the -D argument multiple times.

For example:

-Dtest.history.disabled=true

 

-e<args>, or --export <args>

It commands the runner to export data of statistics groups to .csv files.

Usage: -e<FileName>=<StatGroupName>. FileName is the fully-qualified name of the target file (if an existing file is specified, it will be overwritten). StatGroupName is the name of the exported statistics group; you can find these names on the Statistics page of the load test editor.

If the file name or the group name contains spaces, enclose the entire argument in quotation marks.

Use the -e argument multiple times to export multiple statistics groups.

For example:

"-eC:\Work\statistic.csv=New Statistic Group"

 

-F<args>, or --format <args>

It specifies the format of the exported reports. 

Usage: -F<Format Name>. The supported formats are HTML, RTF, CSV, PDF, XLS, TXT, and XML.

Specify only one format for this argument.

For example:

-FXML

 

-G<args>

It specifies the value of a global property for the test run. The specified value will override the variable value during execution.

Usage: -G<variable>=<value>. If the value includes spaces, enclose the entire argument in quotation marks. To override several variables, specify the -G argument multiple times.

For example:

-Gglobal.property=true

 

-h, or --help

It outputs the description of the command and its options.
 

-j

It commands the runner to generate a JUnit-style report.
 

-J

It commands the runner to group JUnit-style results by assertion type. If this argument is not specified, the results are grouped at the test, scenario, or target level.
 

-l, or --local

If this argument is specified, the runner will simulate the load from your local computer. Otherwise, it will distribute the load simulation among the agents.

This argument overrides the Run Settings parameter specified in your load test editor.
 

-L<args>, or --limits <args>

It specifies limits for all test runs.

Usage: -L<SECONDS>:<TARGETS>:<FAILURES>

<SECONDS> – Specifies the maximum allowed execution time in seconds.

<TARGETS> – The maximum number of runs permitted for the test cases (targets) used in your load test. The target run counter grows with each test case execution; if a test case is executed in a loop, each iteration raises the counter.

<FAILURES> – The maximum number of allowed errors.

When any of these limits is reached, the runner stops the test execution. Zero means the limit is not set.

For example:

-L60:100:20

This argument overrides the corresponding limits specified in the load test editor.
 

-P<args>

It specifies the value of a project property for a test run. The value specified will override the variable value during execution.

Usage: -P<variable>=<value>. If the value includes spaces, enclose the whole argument in quotation marks. To override several variables, specify the -P argument multiple times.

For example:

-Pproject.property=true

 

-r<args>, or --reports <args>

It commands the runner to generate and save all reports into a specified directory. Usage: -r<directoryName>. To specify the format of the report, use the -F command-line argument. Use the -S argument to include specific statistics data in the main report.

For example:

-rC:\Work\Report

 

-S<args>, or --statistics <args>

It specifies the statistics groups to be included in the report.

Usage: -S<statistic group>. You can find the group names on the Statistics page of the load test editor (see above).

If a group name contains spaces, enclose the entire argument in quotation marks. Use the -S argument multiple times to specify multiple groups. If you leave this argument out, the report will include all the statistics groups listed on the Statistics page.

Example:

"-SNew Statistics Group"

 

-t<args>, or --timeout <args>

It specifies the amount of time (in seconds) during which an agent running a test tries to reconnect to the controller if it gets disconnected.

Usage: -t<timeout>.

If a connection cannot be established within this time, the agent halts the test execution. If no agents are used, the option has no effect. If this option is not specified, the timeout is 10 minutes by default.
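For example, to give a disconnected agent five minutes to reconnect:

-t300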

🌵Examples

  • The following command runs the MassLoad1 test from the specified test project for 10 minutes and saves the test report to the c:\test reports directory in the PDF format:
loadtestrunner.bat -L600:0:0 "-rc:\test reports" -FPDF "c:\my projects\my-project.xml" -nMassLoad1
  • The following command assigns a value to the file.separator system variable for the test run, runs MyLoadTest from the specified project on the specified agents, and exports the statistics of two groups to .csv files (the target file names below are illustrative):
loadtestrunner.bat -Dfile.separator=; -a "192.168.0.10:8080=Test Scenario 1" -a "192.168.0.20:8800=Test Scenario 2" "-estats1.csv=My Stat Group 1" "-estats2.csv=My Stat Group 2" "c:\my projects\my-project.xml" -nMyLoadTest

🍂PerformanceTestRunner Exit Codes

PerformanceTestRunner uses the following exit codes:

0 – The test run finished without errors.

-1 – The test run failed.

9 – There are licensing issues. This value is returned if you run the runner on a computer that does not have a Ready API Performance license.

Note: In some cases, third-party libraries or the JVM may return exit codes that are not listed here.
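In an automation pipeline, you would typically check this exit code to decide whether the build should fail. Below is a minimal sketch of a Windows batch wrapper; the project path and test name are placeholders:

call loadtestrunner.bat "c:\my projects\my-project.xml" -nLoadTest1
if %ERRORLEVEL% neq 0 (
    echo Load test failed with exit code %ERRORLEVEL%
    exit /b %ERRORLEVEL%
)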

Frequently Asked Questions

What is automated testing?

Automated testing is a software testing method in which scripted test sequences are executed by testing tools. Once created, automated tests can run repeatedly at no additional cost and are much faster than manual tests. Automated software testing can reduce the time it takes to run repetitive tests from days to hours.

What is test runner Ready API?

You can use the test runner to run functional tests and export the results. The runner can be started from the command line or from the Ready API user interface; the latter approach is useful when you need to build the command line and verify the settings.

What are the utility parameters for PerformanceTestRunner GUI?

To automate test runs in Ready API using the GUI, the configuration dialogue organizes the utility parameters into the following tabs: Basic, Overrides, Reports, Export Statistics, and Properties.

What are the PerformanceTestRunner Exit Codes?

PerformanceTestRunner uses the exit codes 0 (the test run finished without errors), -1 (the test run failed), and 9 (licensing issues).

Conclusion

In this article, we learned how to automate test runs in Ready API, covering performance test runs both through the GUI and from the command line. For the command line, we went through the required arguments as well as the various optional arguments that can be used to automate test runs in Ready API.

You can refer to our guided paths on Coding Ninjas Studio to learn more about DSA, Competitive Programming, JavaScript, System Design, etc. Enroll in our courses and refer to the mock test and problems available. Take a look at the interview experiences and interview bundle for placement preparations.

Happy Learning!
