Continuum Scripting Framework

Overview

The Continuum Scripting Framework is a Node CLI that executes Continuum test scripts for Web and Mobile using the Continuum Java SDK. This document describes configuration, test authoring, installation, and usage of the framework.

Prerequisites

Framework Prerequisites

  • Node 10.10+
  • JRE 8+ (optional; if necessary, we'll attempt to download and install a local version for you)
  • One or more supported web browsers

For Android App Testing

For iOS App Testing

App Prerequisites

Some Android and iOS applications must meet additional requirements to be tested with this framework, as described below.

Android Hybrid Apps

The Android platform requires some code to be included in your app in order for Continuum to access the contents of any web views (i.e. instances of WebView) in your app. These changes are detailed in Google's documentation on remote debugging of WebViews. You'll need to make these changes in order for Continuum to navigate any web views and identify any accessibility concerns inside web views in your app; if you don't want this functionality, no changes to your app are necessary.
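Concretely, the change Google documents is enabling web contents debugging. Here's a minimal sketch, assuming you subclass Application; the MyApplication class name is a placeholder, and you'd wire this into whatever Application (or Activity) your app already has:

```java
import android.app.Application;
import android.content.pm.ApplicationInfo;
import android.os.Build;
import android.webkit.WebView;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Allow tools like Continuum to inspect WebView contents.
        // Guarded so debugging is only enabled in debuggable builds
        // and only on Android 4.4+ where the API exists.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT
                && (getApplicationInfo().flags & ApplicationInfo.FLAG_DEBUGGABLE) != 0) {
            WebView.setWebContentsDebuggingEnabled(true);
        }
    }
}
```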

iOS Apps

NOTE: The iOS app being tested must be built specifically for the device you want to test with, whether physical or virtual. This is a platform requirement enforced by Apple. Please make sure this is the case before proceeding.

Hybrid Apps on Physical Devices

Continuum can access WKWebView and UIWebView elements out of the box on a simulator; however, if you are testing against a physical device, additional setup is required. Please consult Appium's documentation for details on how to test the contents of web views in your app on a physical device. Unfortunately, accessing SFSafariViewController elements is not currently supported in either case.

Setup

There are references throughout this document to a continuum.json file, which you'll need to create from a template. This template can be found in the node_modules/@continuum/continuum-script-executor/src/main/resources directory of your npm installation after installing this project. Copy this file into your own src/main/resources directory, relative to wherever you choose to execute continuum-script-executor, then edit it to meet your needs. For more information, check out the 'Usage' section below.

Mobile

Android

continuum.json

The continuum.json file defines all Continuum-specific configuration for this project.

  1. Set platformName to Android.
  2. Set pathToAppFile to the absolute path of the APK file you'd like to test.

Virtual Device Testing

  1. Set deviceName to emulator-5554.
  2. Set virtualDeviceName to the name of the Android Virtual Device (AVD) you'd like to test against. You can use the Android Virtual Device Manager in Android Studio to create or otherwise get the name of an AVD to use. If your desired virtual device name has spaces in it, use underscores instead.
  3. Set physicalDeviceId to null.
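Putting those steps together, the relevant portion of a continuum.json for Android virtual device testing might look like the following sketch. The APK path and AVD name here are placeholders, and your actual file will contain other settings not shown:

```json
{
  "platformName": "Android",
  "pathToAppFile": "/absolute/path/to/your-app.apk",
  "deviceName": "emulator-5554",
  "virtualDeviceName": "Pixel_3_API_29",
  "physicalDeviceId": null
}
```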

Physical Device Testing

If you'd like to test against a physical Android device instead of a virtual one, follow the steps below:

  1. Review Android's documentation for connecting your physical device and configuring it for debugging/testing.
  2. Execute adb devices from the command line for a list of connected devices, including virtual ones. The first column of each row that doesn't begin with 'emulator' is a device ID for a physical Android device connected to your computer, e.g. 0a388e93. Copy the device ID for the physical device you wish to test against. For more information, review Android's documentation.
  3. Set physicalDeviceId to the device ID you copied in the previous step.
  4. Set virtualDeviceName to null.
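For physical device testing, the same fields might instead look like this sketch, using the example device ID from above; the APK path is a placeholder and other settings are omitted:

```json
{
  "platformName": "Android",
  "pathToAppFile": "/absolute/path/to/your-app.apk",
  "virtualDeviceName": null,
  "physicalDeviceId": "0a388e93"
}
```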

iOS

continuum.json

The continuum.json file defines all Continuum-specific configuration for this project.

  1. Set platformName to iOS.
  2. Set pathToAppFile to the absolute path of the APP or IPA file you'd like to test.
  3. Set automationName to XCUITest.

Virtual Device Testing

  1. Find the name of the virtual device you'd like to test against by executing instruments -s devices from a terminal window. This will show a list of all connected devices, including virtual ones. Each row that ends with '(Simulator)' represents a virtual device that's available to use. Copy the human-readable name for the virtual device you wish to test against, e.g. iPhone 8, and note the iOS version it uses, e.g. 13.2.2.
  2. Set deviceName to the human-readable name of the iOS virtual device you copied in the previous step.
  3. Set platformVersion to the version of iOS installed on the virtual device you'd like to use, referenced in step #1. Use only up to the minor version of the version number here, e.g. 13.2 rather than 13.2.2, if applicable.
  4. Set physicalDeviceId to null.
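Putting those steps together, the relevant portion of continuum.json for iOS virtual device testing might look like this sketch, reusing the example device name and iOS version from above; the APP path is a placeholder and other settings are omitted:

```json
{
  "platformName": "iOS",
  "pathToAppFile": "/absolute/path/to/YourApp.app",
  "automationName": "XCUITest",
  "deviceName": "iPhone 8",
  "platformVersion": "13.2",
  "physicalDeviceId": null
}
```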

You may encounter a dialog like "Do you want the application 'WebDriverAgentRunner-Runner.app' to accept incoming network connections?" when running this sample project while targeting a virtual device. If you do, click 'Approve' when the dialog appears. You can (temporarily) disable your Mac's local firewall to make the message go away completely.

Physical Device Testing

If you'd like to test against a physical iOS device instead of a virtual one, follow the steps below:

  1. Find the ID of the physical device you'd like to test against by executing instruments -s devices from a terminal window. This will show a list of all connected devices, including virtual ones. The unique identifier enclosed in brackets for each row that doesn't end with '(Simulator)' is a device ID for a physical device known by your computer. Copy the device ID for the physical device you wish to test against, e.g. 195c3654909de8b0fe0123e81038b064a492527e.
  2. Set physicalDeviceId to the device ID you copied in the previous step.
  3. Set xcodeOrgId to your Team ID, a unique 10-character string generated by Apple that is assigned to your team. You can find your Team ID using your developer account. Sign in to developer.apple.com/account, and click Membership in the sidebar. Your Team ID appears in the Membership Information section under the team name. You can also find your team ID listed under the "Organizational Unit" field in your iPhone Developer certificate in your keychain. If you have access to the source code of the app you're testing, you may also find it by opening up the project in Xcode and navigating to Build Settings > Signing > Code Signing Identity.
  4. Set xcodeSigningId to your signing ID. This will probably just be iPhone Developer.

If you encounter any issues, please refer to Appium's documentation on this subject.

Hybrid Apps

By default, Continuum is configured to scan the contents of any visible web views in your app for accessibility concerns. If you'd like to disable this functionality, you can do so by setting scanWebViews in continuum.json to false.
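For example, to turn web view scanning off, the relevant line of continuum.json would look like this (other settings omitted):

```json
{
  "scanWebViews": false
}
```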

Authoring Test Scripts

Test scripts accepted by this executor are authored in XML using a schema defined by the 'schema.xsd' file included with this test script executor in the same directory as the README file. Include the following attributes on the root tests element described in the next section below for some type-ahead and validation features in your XML editor of choice:

<tests
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="file:///path/to/schema.xsd"
>
    <!-- test elements go here -->
</tests>

Specifying xsi:noNamespaceSchemaLocation as a relative or absolute path to the 'schema.xsd' provided with this test script executor is highly recommended when authoring test scripts; many XML editors can use this schema to help you author test scripts faster.

Below we go into great detail about each of the different elements supported by this schema.

Containers

Containers help keep things organized. The outermost (required) element tests is a container where global settings are defined, like what web browser to use and, in the case of web testing, what URL to navigate to. Inside a tests element should be at least one test element; test elements define the start and end of a given test against the website or app under test. test elements can themselves optionally contain step elements to further break up a given test into more manageable blocks for readability purposes.

You can name any container using the name attribute. This name will be printed to the console as the script is executing for your reference.

<tests url="https://www.google.com" browser="chrome">
    <test name="First Test">
        <!-- instructions can go here -->
        <step name="First Test Step">
            <!-- instructions can go here -->
        </step>
        <!-- instructions can go here -->
    </test>
</tests>

If you'd like to record video of one of your tests running, you can specify a videoFilePath attribute on the applicable test container. videoFilePath should specify the relative or absolute path of where you'd like the video saved, and it should end in ".mp4":

<test videoFilePath="example-test-run.mp4">
    <!-- instructions can go here -->
</test>

This will work even if you're running your tests headlessly, or if your browser/emulator window isn't in focus.

Note that enabling video recording may slow down test execution, and after your test has been executed, the video must be rendered, which can take some time. Execution of the rest of your test script will be paused while this rendering occurs, and will only resume once rendering is complete. Exactly how long video rendering takes largely depends on how long your test took to execute as well as how beefy your computer is, specifically its CPU and memory.

Properties

Containers can have at most one properties element, which itself contains property elements. You can think of each property element as declaring and defining a scoped variable: its name is what you'll use later to retrieve its value. You can reference a property in any attribute value of any element that's in scope using the ${} syntax. Properties defined in a container are available to all its nested containers as well, even if a nested container has its own properties element with its own properties defined. This means properties can themselves use properties defined previously by ancestor containers.

Here's a complete example of nested containers and nested properties:

<tests url="https://www.google.com" browser="chrome">
    <properties>
        <firstTestName>First Test</firstTestName>
        <unusedProperty>this property isn't used anywhere</unusedProperty>
    </properties>
    <!-- 'firstTestName' and 'unusedProperty' are now available to all subsequent elements in this `tests` container as ${firstTestName} and ${unusedProperty}, respectively -->
    <test name="${firstTestName}">
        <properties>
            <!-- Note the property 'firstTestStepName' below is itself using the value of the property 'firstTestName' defined earlier -->
            <firstTestStepName>${firstTestName} Step</firstTestStepName>
        </properties>
        <!-- 'firstTestStepName' is now available to all subsequent elements in this `test` container as ${firstTestStepName} -->
        <step name="${firstTestStepName}">
            <properties>
                <!-- Note the property 'firstTestName' below, defined earlier at the beginning of the `tests` container, is effectively redefined here -->
                <firstTestName>First Test, But Cooler</firstTestName>
            </properties>
            <!-- 'firstTestName' is now available (with its new value!) to all subsequent elements in this `step` container as ${firstTestName} -->
        </step>
    </test>
</tests>

Note that environment variables are also supported and use the same ${} syntax, e.g. ${ANDROID_HOME}; you do not need to define a property inside your test scripts to use them. Property names take priority, but if a given reference name wrapped in ${} doesn't match a property in scope and does match an environment variable, the environment variable's value will be used. Use this feature to have your test scripts behave differently depending on what environment they're run on or for. For example, you can have prerequisite environment variables that you specify in your test scripts for emails and passwords so that you don't need to include plaintext emails and passwords in those test scripts, which is more secure.
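For instance, assuming you've exported environment variables named TEST_EMAIL and TEST_PASSWORD before running your script (both names, like the element IDs below, are hypothetical), a login sequence could reference them without embedding credentials:

```xml
<!-- TEST_EMAIL and TEST_PASSWORD are environment variables, not properties;
     export them in your shell before executing this script -->
<type id="email" text="${TEST_EMAIL}" />
<type id="password" text="${TEST_PASSWORD}" />
<click id="sign-in-button" />
```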

Functions

Containers can have functions, which define a set of instructions to be executed later. Unlike other containers, function elements require a name attribute; the value of the name attribute is what's used to subsequently invoke the function when you want to execute the instructions it contains. Function names must be unique within scope: you cannot redefine a function once it's been defined within the same scope.

<tests url="https://www.google.com" browser="chrome">
    <!-- we define our first function named 'myFunction' below -->
    <function name="myFunction">
        <!-- functions can have at most one set of properties, just like other containers -->
        <properties>
            <unusedProperty>this property isn't used anywhere</unusedProperty>
        </properties>
        <!-- instructions to be executed whenever 'myFunction' is invoked go here -->
    </function>
    <test name="First Test">
        <!-- we invoke our 'myFunction' function below, which will execute all of its instructions sequentially -->
        <myFunction />
        <!-- you can define as many functions as you want as long as the name is unique within scope -->
        <function name="myOtherFunction">
            <!-- instructions to be executed when 'myOtherFunction' is invoked go here -->
        </function>
        <myOtherFunction />
    </test>
</tests>

Note that nested functions are not yet supported, i.e. function definitions inside of other function definitions, but you can execute a function from inside another function's definition:

<tests url="https://www.google.com" browser="chrome">
    <function name="myFunction">
        <!-- instructions to be executed whenever 'myFunction' is invoked go here -->
    </function>
    <test>
        <function name="myOtherFunction">
            <myFunction />
        </function>
        <myOtherFunction />
    </test>
</tests>

Instructions

Instructions do the actual heavy lifting. Inside of test and step elements are instruction elements: type, click, etc. These elements accept various parameters to specify the thing to be interacted with. For example, to type the text "Level Access" into an HTML element on a page with an ID of 'searchbox', you might use the following instruction:

<type id="searchbox" text="Level Access" />

In the above example, the attribute id is used to specify the element to be interacted with, but instruction elements also accept other attributes that may be more useful in different contexts. Here's a complete list; just be sure to specify exactly one of them for each instruction:

  • id
    • Web: the element's 'id' attribute
    • iOS: the element's 'name' attribute in Xcode and Appium
    • Android: the element's 'android:id' attribute in Android Studio; its 'resource-id' attribute in Appium
  • className
    • Web: the element's 'class' attribute
      • compound class names, e.g. "class-name-1 class-name-2", are supported, but will only return exact string matches; if class order doesn't matter for your use case, use the css attribute instead of className, e.g. css=".class-name-1.class-name-2"
    • iOS: the full name of the XCUI element (e.g. "XCUIElementTypeButton"), which always begins with "XCUIElementType"
    • Android: the full name of the UIAutomator2 class (e.g. "android.widget.TextView")
  • css (not applicable to native mobile content)
  • xpath
  • visibleText
  • visibleTextStartingWith
  • visibleTextContaining

You can also prefix all the above attribute names with any of the following keywords to create new attributes:

  • above
  • below
  • toLeftOf
  • toRightOf
  • onTopOf (not applicable to native mobile content)
  • behind (not applicable to native mobile content)
  • closestTo (matches at most one element)
  • furthestFrom (matches at most one element)

Attributes that use any of the above prefixes are called hints, and they help narrow down exactly which element to interact with. These can be particularly useful when dealing with inconsistent element attribute values, e.g. dynamic page IDs, on the page you're trying to test, or if you don't have much visibility into the source code of the page, e.g. for mobile apps. You can use as many hints as you like, but they must be used with exactly one attribute with a base attribute name, e.g. css.

Here's an example of one hint belowVisibleText being used:

<type css="input" belowVisibleText="Username" text="my@email.com" />

Note that if a given hint isn't found on the page, that's okay: your interaction (and thus your test) will not fail because of it; the hint will simply be skipped.

Here's a list of basic instructions available to you, all of which take in the same attribute options discussed above for specifying the element you want to interact with:

  • tap/touch/click (all aliases of each other)
  • type/input (both aliases of each other)
  • hoverOver

All of these instructions will automatically scroll to the specified element prior to performing the rest of their functionality. If you only wish to scroll to a given element without actually interacting with it, a scrollTo instruction is also available. Also, all these instructions will, by default, wait up to 10 seconds for the specified element to be present on the page before scrolling to it. For more details on this waiting functionality, check out the next section below.
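For example, to scroll a page footer into view without interacting with it (the css value here is illustrative):

```xml
<scrollTo css="footer" />
```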

Waiting

By default, instructions that specify an element on the page wait up to 10 seconds for the element to appear before doing anything. To change this default timeout of 10 seconds, you can include a timeout attribute on the instruction:

<click id="submit-button" timeout="30000" />

The above example raises the timeout for this click instruction to 30 seconds (timeouts are defined in milliseconds), such that if the 'submit-button' element to be clicked isn't present on the page after 30 seconds, the instruction (and thus the test) will fail. Default timeouts can also be changed for all instructions in a given step, test, or tests using the defaultTimeout attribute:

<tests defaultTimeout="10000">
    <test defaultTimeout="5000">
        <step defaultTimeout="3000">
            <!-- instructions here will have a default timeout of 3 seconds -->
        </step>
        <!-- instructions here will have a default timeout of 5 seconds -->
    </test>
    <test>
        <!-- instructions here will have a default timeout of 10 seconds -->
    </test>
</tests>

Per the above example, timeouts for a given instruction are evaluated in the following whichever-comes-first order:

  1. timeout attribute on the instruction
  2. defaultTimeout attribute on the step containing the instruction
  3. defaultTimeout attribute on the test containing the instruction
  4. defaultTimeout attribute on the tests container containing the instruction

If you'd just like to wait for the presence of an element without interacting with it, you can do this with the wait instruction:

<wait id="submit-button" timeout="10000" />

And if you'd just like to wait for a fixed amount of time, you can also do that with the wait instruction, just don't specify an element:

<wait timeout="1000" />

Another method of waiting that might be helpful is to use the waitForInput instruction:

<waitForInput />

Once this instruction is run, test execution is paused, and it won't resume until you press a key in the console. This can be particularly useful if there are complex interactions with the page that you may want to perform manually during a given test run. An example is if you want to perform manual accessibility testing against the current page to supplement automatic test results returned by the testForAccessibility instruction.

Taking Screenshots

Sometimes you may want to take a screenshot of the current state of the page or even just a particular element on the page for later review. You can do this using the screenshot instruction:

<screenshot filePath="/Users/jpizzurro/Desktop/screenshot.jpg" />

filePath is a required attribute, and should be a file path (including file name and .jpg file extension) to where you would like the resulting screenshot saved. If a relative file path is specified as opposed to an absolute one, the screenshot will be saved in the same directory as the README file.

Here's an example that specifies an element, which will result in only that element being screenshotted (it also uses a relative file path):

<screenshot css="header" filePath="header_screenshot.jpg" />

Conditionals

Conditionals are both containers and instructions in that they break up test execution into blocks like containers do, but they also take in selection attributes that are used to define whether or not the elements contained by the conditional are actually executed. If an if conditional's selection attribute defines an element that does not exist, the elements contained by the if conditional, i.e. the instructions that are children of the conditional, are skipped entirely. Here's an example:

<if visibleText="Try Again">
    <click visibleText="OK" />
</if>

In the example above, the click instruction will only be executed if there is visible text on the page of "Try Again", otherwise the click won't happen.

You can invert this logic by adding a not attribute to the if element, like this:

<if not="" visibleText="Try Again">
    <click visibleText="Submit" />
</if>

In the example above, the click instruction will only be executed if there is not visible text on the page of "Try Again", otherwise the click won't happen.

As of right now, the if element is the only conditional supported; elseif and else are not supported. Also note that nested conditionals are not yet supported.

Special Instructions

These instructions don't accept the element selection attributes that other instructions do, but are nevertheless useful.

Navigating to Another URL

The tests container defines what URL to start at, but if you want to go to a different URL at some point in your test, you can use the goTo instruction and its url attribute:

<goTo url="https://www.google.com/" />

Pressing Keys

press is an instruction that accepts a single attribute key which you can use to perform single key presses:

<press key="PAGE_DOWN" />

Here's a complete list of supported key codes (beyond just single characters) that you can use in the key attribute:

  • PLUS
  • MINUS
  • NULL
  • CANCEL
  • HELP
  • BACK_SPACE
  • TAB
  • CLEAR
  • RETURN
  • ENTER
  • SHIFT
  • LEFT_SHIFT
  • CONTROL
  • LEFT_CONTROL
  • ALT
  • LEFT_ALT
  • PAUSE
  • ESCAPE
  • SPACE
  • PAGE_UP
  • PAGE_DOWN
  • END
  • HOME
  • LEFT
  • ARROW_LEFT
  • UP
  • ARROW_UP
  • RIGHT
  • ARROW_RIGHT
  • DOWN
  • ARROW_DOWN
  • INSERT
  • DELETE
  • SEMICOLON
  • EQUALS
  • NUMPAD0
  • NUMPAD1
  • NUMPAD2
  • NUMPAD3
  • NUMPAD4
  • NUMPAD5
  • NUMPAD6
  • NUMPAD7
  • NUMPAD8
  • NUMPAD9
  • MULTIPLY
  • ADD
  • SEPARATOR
  • SUBTRACT
  • DECIMAL
  • DIVIDE
  • F1
  • F2
  • F3
  • F4
  • F5
  • F6
  • F7
  • F8
  • F9
  • F10
  • F11
  • F12
  • META
  • COMMAND
  • ZENKAKU_HANKAKU

Switching to Another Window

switchTo is an instruction that can be used to switch between browser windows or tabs. This is useful when a test involves clicking on a link that spawns a new window or tab that you'd then like to interact with, for example. Simply provide a window or tab attribute (they are aliases for each other) with a value that specifies a partial page title or URL of the window or tab you'd like to switch to:

<switchTo window="LevelAccess.com" />
<switchTo tab="/industries" />

The parts of the 'Waiting' section from earlier in this README about instructions apply to switchTo elements too, so if you're encountering timing issues around a page title or URL not changing quickly enough, try adding a timeout attribute.
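For instance, if a newly opened tab takes a while to load its title, you might give switchTo extra time like so:

```xml
<switchTo tab="/industries" timeout="15000" />
```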

You can also omit the window attribute to switch back to the main window/tab, i.e. the window/tab that was started automatically when the execution of your tests first began:

<switchTo />

Note that the switchTo instruction is only applicable in web testing contexts, and therefore cannot be used in mobile tests.

Testing for Accessibility

Once you've used instructions to navigate to a particular page and get that page in a particular state you'd like to automatically test for accessibility, you can use the following element to scan the page for any accessibility concerns:

<testForAccessibility />

When the script executor is run in debug mode, this will print all of the test results of the scan as pretty-printed JSON to the console window you used to run your test script. To subsequently send these same test results to AMP, refer to the next section below as this is handled by a different special instruction.

For web testing, you can also optionally specify a CSS selector via the css attribute to only scan the specified part of the page rather than the entire page:

<testForAccessibility css="#header" />

Similarly, for mobile testing, you can optionally specify an Appium XPath expression via the xpath attribute:

<testForAccessibility xpath="//android.widget.Button[@resource-id='com.levelaccess.exampleandroidapp:id/button']" />

In addition to being able to restrict what's tested for accessibility, you can also specify minimum scoring criteria using any of the following attributes:

  • minSeverity: how severe this accessibility concern is on a scale of 1 to 10, where 10 is the most severe
  • minNoticeability: how noticeable this accessibility concern is on a scale of 1 to 10, where 10 is the most noticeable
  • minTractability: how tractable this accessibility concern is on a scale of 1 to 10, where 10 is the hardest to resolve

Here's an example that uses multiple attribute options in combination to produce a very customized result set:

<testForAccessibility css="#header" minSeverity="8" minNoticeability="6" minTractability="7" />

If you would like to perform manual accessibility testing on a given page instead of or to supplement the automatic test results returned above, you can use the waitForInput instruction mentioned earlier in the 'Waiting' section of this README.

Submitting Test Results to AMP

Once you've successfully tested a page using the testForAccessibility instruction, you can submit the last set of test results to AMP using a submitTestResultsToAMP instruction:

<submitTestResultsToAMP
    organizationId="10073"
    assetId="36303"
    reportName="James Developer Workstation"
    moduleName="Google Search"
    moduleLocation="https://www.google.com"
    reportManagementStrategy="APPEND"
    moduleManagementStrategy="OVERWRITE"
/>

Notice that the organization, asset, report, and module you'd like to submit test results to are all defined here. You can also specify report and module IDs instead of names using reportId instead of reportName and moduleId instead of moduleName, respectively. If you're doing web testing, you can set moduleName and moduleLocation to "${currentPageTitle}" and "${currentPageUrl}" to have them be set to the current page title and current page URL, respectively. Report and module management strategies are also defined here; check out the 'AMPReportingService Class' section of this support doc to learn more about what these are and which ones are most applicable to your use case.

You'll need to specify all this information at least once, but it's stateful: any subsequent submitTestResultsToAMP instructions will reuse whatever you last specified by default. For example, to submit additional test results to the same module in AMP after the first example above, we can be much less verbose:

<submitTestResultsToAMP
    moduleManagementStrategy="APPEND"
/>

The above reuses the same organization, asset, report, module, and report management strategy we used in our last example, assuming this instruction comes after the first one. We just change the module management strategy here to append test results to the module instead of overwrite any existing ones. Just remember to define everything at least once in a previous submitTestResultsToAMP instruction, and be mindful that unless you specify a new organization/asset/report/module/management strategy in a given submitTestResultsToAMP instruction, the previous one will be reused.
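As an example of the built-in page variables mentioned above, a web test could submit results using the current page's title and URL as the module name and location, assuming the organization, asset, and report were specified in an earlier submitTestResultsToAMP instruction:

```xml
<submitTestResultsToAMP
    moduleName="${currentPageTitle}"
    moduleLocation="${currentPageUrl}"
    moduleManagementStrategy="OVERWRITE"
/>
```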

In addition to the above, make sure that you've specified the appropriate AMP instance, e.g. "https://amp.levelaccess.net", in your continuum.json file for ampInstanceUrl. You'll also need to specify an AMP API token for ampApiToken in continuum.json. Refer to the 'API Token Key' section of this support doc to learn how to generate this token from your AMP instance.

Known Issues

  • iOS and Android: Test scripts cannot themselves define a mobile app to test the way you can for web testing using the url and browser attributes of the tests container element. Instead, this information needs to be defined in the continuum.json file of this project.
  • Iframes are not supported, both with regard to interacting with elements inside iframes using script instructions and with regard to scanning content inside iframes for accessibility concerns. This includes web content rendered in web views in mobile apps.

Crawling

If you'd rather crawl and test your website for accessibility without having to write explicit instructions to navigate it, you can try using the crawl instruction, such as in the following example:

<crawl
    organizationId="10073"
    assetId="36303"
    reportName="My First Crawl"
    reportManagementStrategy="OVERWRITE"
    moduleManagementStrategy="OVERWRITE"
    maxPageDepth="3"
    maxPageCount="5000"
    browserTimeout="60000"
    includeIframeContent="true"
    scope="DOMAIN"
/>

Notice this instruction shares many of the same attributes as the aforementioned submitTestResultsToAMP instruction, so review the documentation for that instruction above to use this one. For example, the optional minSeverity, minNoticeability, and minTractability attributes of the submitTestResultsToAMP instruction can be used in this crawl instruction as well. That's because all the test results of the crawl will be automatically submitted to the AMP report you specify, with separate modules in that report for each page that's crawled.

You can specify maxPageDepth and browserTimeout attributes to control how many pages deep you'd like your site to be crawled, and the maximum amount of time, in milliseconds, you'd like to wait for each page to load before testing it, respectively. Larger values for these two attributes may result in more complete crawls at the expense of taking longer to finish, while smaller values may result in less complete crawls that finish more quickly, so we recommend starting with the defaults specified in the example above and working from there.

By default, the content of any iframes is tested for accessibility, and any test results from those iframes are included in the module of the page they appeared on. You can disable this functionality for your crawl by setting the includeIframeContent attribute to false.

Finally, the scope attribute can be used to determine which pages actually get crawled and tested. By default, this is set to DOMAIN, which will result in all URLs that match your URL's domain (e.g. "google.com") being included. Other options include HOST, which can be used to restrict crawling by subdomain (e.g. only URLs with a hostname of "developers.google.com"), and PATH, which can be used to restrict crawling by partial href (e.g. only URLs whose hostname + pathname start with "developers.google.com/events").

Note that the crawling of mobile apps is not supported by this instruction at this time, and that crawling works best when used on traditional websites with multiple pages; success with single-page applications (SPAs) may vary without additional configuration.

Examples

For Web

Here's a simple test script that navigates to www.google.com in Google Chrome, types "Level Access" into the search bar on that page, clicks the button to initiate the search, then scans the resulting page of search results for accessibility concerns:

<tests url="https://www.google.com" browser="chrome" browserWidth="1366" browserHeight="768" headless="false">
    <test>
        <step>
            <type css="input[title='Search']" text="Level Access" />
            <click visibleText="Google Search" />
            <testForAccessibility />
        </step>
    </test>
</tests>

Execute the example script above with debug mode enabled to see test results printed to the console.

For Mobile

Here's a simple test script that installs and navigates to our Continuum Android sample app in an emulator (assuming continuum.json has been configured accordingly), types "Level Access" into one of the native text boxes on the first screen of that sample app, then scans the currently viewable area of the app for accessibility concerns:

<tests>
    <test>
        <step>
            <type id="editText" text="Level Access" />
            <testForAccessibility />
        </step>
    </test>
</tests>

Execute the example script above with debug mode enabled to see test results printed to the console.

Installation

Add the following to your user's .npmrc file (e.g. ~/.npmrc on macOS), creating it if it doesn't already exist:

@continuum:registry=https://npm.levelaccess.net/continuum/
//npm.levelaccess.net/continuum/:_authToken=TOKEN

Replace TOKEN in the snippet above with the entitlement token you were provided by Level Access. Do not share this token with anyone who is not covered by your Continuum license; you are responsible for any activity using your account's token. If you do not have a token, please contact support@levelaccess.com.

Once your .npmrc file is squared away, execute the following to install the project globally using npm:

npm i -g @continuum/continuum-script-executor

Once complete, this will allow you to execute continuum-script-executor from any directory in any terminal window. You may need to restart any existing terminal windows before you can start using it. See the next section below for usage instructions.

Usage

continuum-script-executor <relative or absolute path to a Continuum test script XML file>

This will automatically install the appropriate JRE for the executor to function, if necessary, then run the specified test script. It will also check for updates to this project and notify you if an update is available after your test script has been executed.

Make sure you've got a valid continuum.json file in the src/main/resources directory from wherever you choose to execute continuum-script-executor. A sample continuum.json file can be found in the node_modules/@continuum/continuum-script-executor/src/main/resources directory of your npm installation. See the 'Setup' section of this README for more information.

For complete usage instructions, simply execute continuum-script-executor.
