How to deploy to JBoss EAP 7 with Gradle, Cargo, and Jenkins

It took me quite a while to get my Java EE 7 application automatically deployed to a target JBoss EAP 7 server from within Jenkins using Gradle as the build tool and Cargo for managing the deployment. So here’s my final solution for you to use! 😉

build.gradle

buildscript {
    dependencies {
        classpath 'com.bmuschko:gradle-cargo-plugin:2.2.3'
    }
}

apply plugin: 'com.bmuschko.cargo'

dependencies {
    cargo "org.codehaus.cargo:cargo-core-uberjar:1.5.0",
          "org.codehaus.cargo:cargo-ant:1.5.0",
          "org.wildfly:wildfly-controller-client:8.2.1.Final"
}

cargo {
    containerId = 'wildfly10x'

    deployable {
        context = 'MyContext'
    }

    remote {
        hostname = '10.1.1.1'
        username = 'remote'
        password = 'remote'

        containerProperties {
            property 'cargo.jboss.management-http.port', 9990
        }
    }
}

Jenkinsfile

stage("Deployment") {
    bat('gradlew cargoRedeployRemote --stacktrace')
}
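The bat step above assumes a Windows build agent. On a Linux agent, the equivalent stage would use the sh step instead (a sketch, assuming the Gradle wrapper script gradlew is committed to the repository):

```groovy
stage("Deployment") {
    sh('./gradlew cargoRedeployRemote --stacktrace')
}
```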

Deploying to JBoss EAP 7 is the same as deploying to Wildfly 10

First of all, there’s no Cargo containerId for JBoss EAP 7. However, you can use Wildfly 10 instead, as you can read here: Codehaus Cargo – WildFly 10.x:

The WildFly 10.x container can be used with the JBoss Enterprise Application Platform (EAP) version 7; i.e. the version released in May 2016

Finding the right versions of Cargo and its Gradle plugin

You need to use matching versions of Cargo and the Cargo Gradle plugin. I’ve found that version 2.2.3 of the Gradle plugin and version 1.5.0 of Cargo itself work fine with Wildfly 10/JBoss EAP 7 (see Execution failed for task ‘:cargoRunLocal’. #152). As of this writing, the latest versions of Cargo (1.6.6) and the plugin (2.3) also work in my environment.

If the versions don’t work correctly, you might get an error like this:

> gradle cargoRedeployRemote

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cargoRedeployRemote'.
> org.codehaus.cargo.container.ContainerException: Cannot create configuration. 
There's no registered configuration for the parameters (container [id = [wildfly10x],
type = [remote]], configuration type [runtime]). Actually there are no valid types
registered for this configuration. Maybe you've made a mistake spelling it?

Deploying to JBoss EAP 7 with the Wildfly Controller Client

Cargo needs a controller client to be able to deploy artifacts to a remote Wildfly 10/JBoss EAP 7 as you can read here: Codehaus Cargo – JBoss Remote Deployer. I’ve found that version 8.2.1.Final of the Wildfly Controller Client org.wildfly:wildfly-controller-client works fine. However, the latest version of org.wildfly.core:wildfly-controller-client (3.0.10.Final) also works.

You need to add it to the cargo dependency configuration in your build script, as shown above. Otherwise you might end up with an error message like this:

> gradle cargoRedeployRemote

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cargoRedeployRemote'.
> org.codehaus.cargo.container.ContainerException: Failed to create deployer
with implementation class org.codehaus.cargo.container.wildfly.WildFly10xRemoteDeployer
for the parameters (container [id = [wildfly10x]], deployer type [remote]).

Changing the JBoss management port

In my case, the target JBoss server uses a different port for remote management. The default is 9990, but I use 19990. Simply adding cargo.port = 19990 to the build file didn’t cut it:

> org.codehaus.cargo.util.CargoException: HTTP request failed, response code: -1,
response message: java.net.ConnectException: Connection refused: connect, response body: null

And by adding --info to the call to gradle I got:

Starting action 'redeploy' for remote container 'wildfly10x' on 'http://localhost:9990'

It took me a while to find the correct way of telling Cargo to use the custom port. The Cargo documentation (see Codehaus Cargo – JBoss Remote Deployer) states:

WildFly 8.x, 9.x and 10.x use the cargo.jboss.management-http.port port

However, setting this property isn’t as easy as adding cargo.jboss.management-http.port = 19990 to your build file, because this results in:

(cargo.jboss.management - http.port) is a binary expression, but it should be a variable expression

And adding the following lines…

cargo {
    jboss {
        management-http {
            port = 19990
        }
    }
}

…leads to a different error:

> Could not find method jboss() for arguments [cargo_61gwz9gjyqje40dvlr47klkas$_run_closure3$_closure6@22ff8f9]
on object of type com.bmuschko.gradle.cargo.convention.CargoPluginExtension.

I finally found the right way of setting the property in this GitHub issue: Local redeployment #123

containerProperties {
    property 'cargo.jboss.management-http.port', 19990
}

However, if you use the latest versions of Cargo and the plugin, cargo.port = 19990 seems to work again.

Example build.gradle using the latest versions of Cargo and the plugin

buildscript {
    dependencies {
        classpath 'com.bmuschko:gradle-cargo-plugin:2.3'
    }
}

apply plugin: 'com.bmuschko.cargo'

dependencies {
    cargo "org.codehaus.cargo:cargo-core-uberjar:1.5.0",
          "org.codehaus.cargo:cargo-ant:1.5.0",
          "org.wildfly.core:wildfly-controller-client:3.0.10.Final"
}

cargo {
    containerId = 'wildfly10x'
    port = 9990

    deployable {
        context = 'MyContext'
    }

    remote {
        hostname = '10.1.1.1'
        username = 'remote'
        password = 'remote'
    }
}

Exception-like error handling in Software AG’s Natural

Error handling in Software AG’s Natural can be done in a way that resembles Exception handling in object-oriented languages like Java.

throw

Instead of throwing an Exception, you raise an error simply by assigning a value to the system variable *ERROR-NR. As soon as a statement like the following is executed, the current program flow is interrupted and the nearest ON ERROR block is executed.

*ERROR-NR := 1234

In fact, we use exactly this feature for raising assertion errors in NatUnit.

catch

You can handle a Natural error in an ON ERROR block anywhere inside your code. Just like an Exception travels up through the call stack to get caught in the nearest try-catch block, a Natural error is handled in the nearest ON ERROR block.

Here’s an example of an ON ERROR block that exits the current module and marks the error as handled:

ON ERROR
    /* do something about it */
    ESCAPE MODULE
END-ERROR

catch (SpecificException e)

You can only define a single ON ERROR block in each Natural module. So if you need to handle specific errors in a different way, you need to have some kind of distinction logic like this:

ON ERROR
    IF *ERROR-NR EQ 1234
        /* do something about it */
        ESCAPE MODULE
    END-IF
END-ERROR

Or if you need to distinguish between multiple errors:

ON ERROR
    DECIDE ON FIRST VALUE OF *ERROR-NR
        VALUE 1234
            /* do something about it */
            ESCAPE MODULE
        VALUE 1235
            /* do something about it */
            ESCAPE MODULE
        NONE IGNORE
    END-DECIDE
END-ERROR

re-throw

If you can’t handle the error in an ON ERROR block, but you want to log it or do something else with it before letting the next ON ERROR block handle it, you don’t need to do anything at all, because that’s the default behaviour.

However, if you exit the ON ERROR block with any statement from the following list, the error is marked as handled and normal control flow continues (in the caller of the module containing the ON ERROR block). So if you want the error to propagate, be sure not to exit the block with any of these statements.

Exiting from an ON ERROR Block:
An ON ERROR block may be exited by using a FETCH, STOP, TERMINATE, RETRY, ESCAPE ROUTINE or ESCAPE MODULE statement. If the block is not exited using one of these statements, standard error message processing is performed and program execution is terminated.

Here’s an example of such a “re-throw”:

ON ERROR
    IF *ERROR-NR EQ 1234
        /* log the error */
        /* DON'T exit with `FETCH`, `STOP`, `TERMINATE`, `RETRY`, `ESCAPE ROUTINE` or `ESCAPE MODULE` */
    END-IF
END-ERROR

Checking which error occurred

Even if you “handle” the Natural error in an ON ERROR block, e.g. by using ESCAPE MODULE, the system variable *ERROR-NR isn’t reset to 0. You have to reset it yourself if needed. If you don’t, the variable can be used in the calling module to check whether a (handled) error occurred. By the way, the system variable *ERROR-LINE contains the line number of the statement that raised the error.

CALLER

CALLNAT 'CALLEE'
IF *ERROR-NR NE 0
    WRITE 'Error' *ERROR-NR 'occurred in line' *ERROR-LINE 'while calling CALLEE'
    /* prints: "Error <number> occurred in line <line> while calling CALLEE" */
END-IF
END

CALLEE

*ERROR-NR := 1234
ON ERROR
    ESCAPE MODULE
END-ERROR
END

If you don’t want any caller to know that an error occurred, simply reset *ERROR-NR:

ON ERROR
    RESET *ERROR-NR
    ESCAPE MODULE
END-ERROR

Global error handler (like a try-catch in main())

You can define a global error handler by setting the system variable *ERROR-TA to the name of a Natural module. In case of an error, Natural automatically calls this module (which has to be a program) and puts information about the error on the stack. The system variables *ERROR-NR and *ERROR-LINE will be reset at this point, so the error handler has to read the information from the stack with INPUT.

CALLER

*ERROR-TA := 'HANDLER'
CALLNAT 'CALLEE'
END

CALLEE

*ERROR-NR := 1234
END

HANDLER

DEFINE DATA LOCAL
1 #ERROR-NR           (N5)
1 #LINE               (N4)
1 #STATUS-CODE        (A1)
1 #PROGRAM            (A8)
1 #LEVEL              (N2)
1 #LEVELI4            (I4)
1 #POSITION-IN-LINE   (N3)
1 #LENGTH-OF-ITEM     (N3)
END-DEFINE

/* read error information from stack */
INPUT #ERROR-NR #LINE #STATUS-CODE #PROGRAM #LEVEL #LEVELI4

WRITE #ERROR-NR #LINE #STATUS-CODE #PROGRAM #LEVEL #LEVELI4 #STATUS-CODE
WRITE *ERROR-NR *ERROR-LINE

END

Output:

1234    10 O CALLEE     2           0 O
     0     0

For more information about the error information on the stack take a look at the section Using an Error Transaction Program in the Natural documentation.

Getting more information about errors programmatically

If you need to find out more about the current error, e.g. in your ON ERROR block, there are quite a few User Exits that deal with errors:

  • USR0040N: Get type of last error
  • USR1016N: Get error level for error in nested copycodes
  • USR2001N: Get information on last error
  • USR2006N: Get information from error message collector
  • USR2007N: Get or set data for RPC default server
  • USR2010N: Get error information on last database call
  • USR2026N: Get TECH information
  • USR2030N: Get dynamic error message parts from the last error
  • USR3320N: Find user short error message (including steplibs search)
  • USR4214N: Get program level information

Automatically reconnect to a database after a failure in JBoss EAP 7

My JBoss EAP 7 server couldn’t cope with a failing database. After a database restart or outage, e.g. due to maintenance, it would not reconnect to the database automatically. The application simply stopped working as soon as the database was unavailable for a short period of time.

JBoss’s server.log was full of (not very helpful) error messages like this:

2017-03-01 12:05:18,175 ERROR [it.macke.repository.UserRepository] (default task-17) Error reading user: org.hibernate.exception.JDBCConnectionException: could not prepare statement: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: could not prepare statement
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1692)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1602)
...
Caused by: org.hibernate.exception.JDBCConnectionException: could not prepare statement
    at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:48)
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
...
Caused by: java.sql.SQLNonTransientConnectionException: Connection is close
    at org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:123)

To make the application work again, I had to restart JBoss, and sometimes even the whole Windows server it runs on.

As it turns out, this is a common problem with JBoss. However, the solution is quite easy: you only need to configure JBoss to validate its database connections. Here’s how this looks in standalone.xml (note the validation element):

<datasource jndi-name="java:jboss/jdbc/MyDS" pool-name="MyDS">
    <connection-url>jdbc:mariadb://localhost:3306/login</connection-url>
    <driver-class>org.mariadb.jdbc.Driver</driver-class>
    <driver>mariadb</driver>
    <security>
        <user-name>user</user-name>
        <password>secret</password>
    </security>
    <validation>
        <check-valid-connection-sql>select 1</check-valid-connection-sql>
        <validate-on-match>true</validate-on-match>
        <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/>
    </validation>
</datasource>

And if you want to script the setting, here’s the code for the CLI:

/subsystem=datasources/data-source=MyDS:write-attribute(name=validate-on-match,value=true)
/subsystem=datasources/data-source=MyDS:write-attribute(name=check-valid-connection-sql,value="select 1")
/subsystem=datasources/data-source=MyDS:write-attribute(name=exception-sorter-class-name,value="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter")

The above settings make JBoss validate the connection on every new request (validate-on-match). Alternatively, you can have it check the connections in the background, e.g. every few seconds. Also make sure to use the correct settings for your database: I use MariaDB/MySQL, and both exception-sorter and check-valid-connection-sql differ between databases (e.g. org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter and select 1 from dual for Oracle). Take a look at the available parameters here: Database Connection Validation JBoss EAP 7 – Configuration Guide
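For the background variant, the corresponding CLI commands would look roughly like this (a sketch; the 10-second interval is just an example value, not a recommendation):

```
/subsystem=datasources/data-source=MyDS:write-attribute(name=validate-on-match,value=false)
/subsystem=datasources/data-source=MyDS:write-attribute(name=background-validation,value=true)
/subsystem=datasources/data-source=MyDS:write-attribute(name=background-validation-millis,value=10000)
```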

How to find the physical file path of the current FUSER of a Natural runtime

Here’s a short subroutine for reading the physical file path of the current FUSER of a Software AG Natural runtime. I’m not sure whether it works on a mainframe, but it definitely runs on Linux.

The subroutine returns the following information if it runs successfully:

P-FUSER-PATH /home/macke/fuser
P-RC 0

Otherwise the return code P-RC will have a value other than zero.

It uses two user exits:

  • USR6006N: Get path to system file
  • USR2013N: Get SYSPROF information

USR2013N reads the information about the current FUSER and returns its DB-ID and file number. USR6006N then takes these two values as input and returns the physical file path of the FUSER.

Subroutine GET-CURRENT-FUSER-PATH

**************************************************************************
*
*  File: GET-CURRENT-FUSER-PATH (VNGFUPAT)
*
*  Reads the physical file path for the current FUSER.
*
*  Tags: FUSER, UserExit
*
*  Parameters:
*    -
*
*  Returns:
*    P-FUSER-PATH - File path for the current FUSER.
*    P-RC - Return code
*
**************************************************************************
DEFINE DATA
*
PARAMETER
*
01 P-RC (I4) BY VALUE RESULT
01 P-FUSER-PATH (A) DYNAMIC BY VALUE RESULT
*
LOCAL
*
* Get path to system file
01 USR6006L
  02 INPUTS
    03 SYSF-DBID (I4)
    03 SYSF-FNR (I4)
  02 OUTPUTS
    03 SYSF-PATH (A253)
    03 RESPONSE-CODE (I4)
    03 INFOTEXT (A65)
01 EXTENSIONS (A1/1:1)
*
* Get SYSPROF information
01 USR2013L
  02 OUTPUTS
    03 FILENAME (A12/1:50)
    03 DBID (P5/1:50)
    03 FNR (P5/1:50)
    03 DBNAME (A11/1:50)
    03 AMOUNT (P4)
*
01 #INDEX (I4)
*
01 #FUSER-DBID (N8)
01 #FUSER-FNR (N8)
*
END-DEFINE
*
DEFINE SUBROUTINE GET-CURRENT-FUSER-PATH
*
RESET P-FUSER-PATH P-RC USR6006L USR2013L EXTENSIONS(*) #FUSER-DBID #FUSER-FNR
*
CALLNAT 'USR2013N'  USR2013L
*
FOR #INDEX = 1 TO USR2013L.AMOUNT
  IF USR2013L.FILENAME(#INDEX) EQ 'FUSER'
    #FUSER-DBID := USR2013L.DBID(#INDEX)
    #FUSER-FNR  := USR2013L.FNR(#INDEX)
  END-IF
END-FOR
*
IF #FUSER-DBID EQ 0 OR #FUSER-FNR EQ 0
  P-RC := 1
  ESCAPE MODULE
END-IF
*
USR6006L.SYSF-DBID := #FUSER-DBID
USR6006L.SYSF-FNR  := #FUSER-FNR
*
CALLNAT 'USR6006N' USR6006L EXTENSIONS(*)
*
P-FUSER-PATH := USR6006L.SYSF-PATH
P-RC := USR6006L.RESPONSE-CODE
*
END-SUBROUTINE
*
END

How to combine/join file paths in Gradle/Groovy

One might think that joining (or combining) two file paths together with Groovy would be an easy thing to do. I’m used to “nice” methods from Ruby like File.join (see How to do a safe join pathname in ruby?):

File.join("path", "to", "join")

As it turns out, there is no such method in Groovy. However, here are two easy ways to safely combine paths in Groovy (which I use in my Gradle build):

import java.nio.file.Paths // only needed for second example

def dir1 = "/the/path"
def dir2 = "to/join"

println new File(dir1, dir2)
println Paths.get(dir1, dir2)
// -> both print "\the\path\to\join" (on my Windows machine)
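Since Groovy runs on the JVM, the same calls work from plain Java as well. Here’s a minimal, self-contained sketch (the class name JoinPaths and its helper methods are made up for this example):

```java
import java.io.File;
import java.nio.file.Paths;

public class JoinPaths {
    // java.io variant: the File(parent, child) constructor joins the two parts
    static String joinWithFile(String base, String child) {
        return new File(base, child).toString();
    }

    // java.nio variant: Paths.get() accepts the segments directly
    static String joinWithPaths(String base, String child) {
        return Paths.get(base, child).toString();
    }

    // java.nio variant: resolve() appends a relative path to an existing Path
    static String joinWithResolve(String base, String child) {
        return Paths.get(base).resolve(child).toString();
    }

    public static void main(String[] args) {
        // all three print the same result, using the platform's separator
        System.out.println(joinWithFile("/the/path", "to/join"));
        System.out.println(joinWithPaths("/the/path", "to/join"));
        System.out.println(joinWithResolve("/the/path", "to/join"));
    }
}
```

All three variants normalize the separator between the segments, so you don’t have to worry about trailing slashes in the first part.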

How to use an existing access token to authenticate against the Google API in Ruby

After a few hours of trial and error, and finally getting authentication against Google’s API working in Ruby, I think it’s time for a blog post 😉

I had a simple (at least I thought it was simple) requirement: reading the people in a user’s Google+ circles with Ruby. Because I use OmniAuth in my application, the user is already authenticated, and I even have their access token stored in the database. However, it took me a few hours to find out how to use this token to access Google’s API.

Reading people from a Google+ user’s circles

Reading the people from a user’s circles is quite easy. Simply use plus.people.list. You can try it with the Google APIs Explorer. However, you need to make sure your app requests the right permissions. This is the corresponding line from my Rails initializer omniauth.rb (I needed to add the scope plus.login and change the access type to offline):

provider :google_oauth2, "CLIENTID", "SECRET", scope: 'profile,email,plus.login', image_aspect_ratio: 'square', image_size: 48, access_type: 'offline', name: 'google'

The Ruby code using google-api-ruby-client looks like this:

plus = Google::Apis::PlusV1::PlusService.new
friends = plus.list_people("me", "visible").items

The items you get from Google+ look like this:

#<Google::Apis::PlusV1::Person:0x4307fd0 
    @display_name="The Name",
    @etag="\"some tag\"",
    @id="1234",
    @image=#<Google::Apis::PlusV1::Person::Image:0x43057f8 
        @url="https://lh6.googleusercontent.com/.../photo.jpg?sz=50">,
    @kind="plus#person",
    @object_type="person",
    @url="https://plus.google.com/1234">

Using an existing access token for authentication

Google’s documentation states that you need to use Signet to authenticate against the Google API. So I thought I’d give it a try and started coding. But as it turns out, Signet is a complex beast and I didn’t want to re-implement the whole OAuth authentication process, because OmniAuth already does that for me. I just wanted to use my existing token for authentication!

Long story short: I found the solution in file http_command.rb in method apply_request_options():

if options.authorization.respond_to?(:apply!)
    options.authorization.apply!(req.header)
elsif options.authorization.is_a?(String)
   req.header[:authorization] = sprintf('Bearer %s', options.authorization)
end

You can simply set the attribute authorization of your PlusService to a string (instead of a Signet object), and it will be sent as the Bearer token in the HTTP request to Google’s API. So I simply had to add this line to my calling code and I was done:

plus.authorization = access_token

The final code

I still can’t believe that the final solution is so simple! 🙂

Gemfile:

gem 'google-api-client', '~> 0.9'

omniauth.rb:

provider :google_oauth2, "CLIENTID", "SECRET", scope: 'profile,email,plus.login', image_aspect_ratio: 'square', image_size: 48, access_type: 'offline', name: 'google'

friend_reader.rb:

require 'google/apis/plus_v1'

plus = Google::Apis::PlusV1::PlusService.new
plus.authorization = access_token
friends = plus.list_people("me", "visible").items

How to enable access logging (accesslog) in JBoss EAP 7

Configuring JBoss EAP 7 to write an access log (like the Apache web server does) is quite easy with the CLI:

/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add

If you need any additional configuration, take a look at this: Wildfly 10.0.0.Final Model Reference.

For example, to change the prefix of the log’s file name:

/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=prefix, value="my_access")

Alternatively, you could change the configuration in the config XML (e.g. standalone.xml):

<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    ...
    <server name="default-server">
        ...
        <host name="default-host" alias="localhost">
            ...
            <access-log prefix="my_access" />
            ...
        </host>
    </server>
    ...
</subsystem>

JBoss will now log every access to the application to an access log file in the log directory (e.g. JBOSS_HOME\standalone\log):

192.168.1.1 - - [23/Jun/2016:13:29:24 +0200] "GET /MyApp/MyPath/MyFile.xhtml HTTP/1.1" 200 5738
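To double-check the configuration, you can read the setting back with the CLI’s generic read-resource operation:

```
/subsystem=undertow/server=default-server/host=default-host/setting=access-log:read-resource
```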

How to seed the database with sample data for an Arquillian test

Arquillian makes it easy to test your Java EE application on a real server. However, if your application uses a database to perform its tasks, how do you make sure a database with test entries is available on the target server?

If you set up a “real” database on the server, the tests become hard to understand, because the data used in assertions is not directly visible in the test. You have to know the content of the database to make sense of the tests. In addition, you have to maintain the database and keep the data in a consistent state.

Instead, I would like my tests to create the test data themselves, so that everything needed to understand a test is in one place and I don’t have to maintain a database system.

Using EntityManager

My first attempt was to inject an EntityManager into my Arquillian test and set up the test data like this:

@RunWith(Arquillian.class)
public class IntegrationTest
{
    @PersistenceContext
    private EntityManager em;

    @Resource
    private UserTransaction userTransaction;

    private User user;

    @Deployment
    public static WebArchive createDeployment()
    {
        return ShrinkWrap
                .create(WebArchive.class)
                .addClass(User.class)
                ...
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
                .addAsResource("arquillian/persistence.xml", "META-INF/persistence.xml");
    }

    @Before
    public void setup() throws Exception
    {
        user = new User("myuser", "mypassword");
        userTransaction.begin();
        em.persist(user);
        userTransaction.commit();
    }

    @Test
    public void existingUserShouldBeFound()
    {
        assertThat(
                em.find(User.class, "myuser"),
                is(user));
    }
}

The file arquillian/persistence.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        </properties>
    </persistence-unit>
</persistence>

It simply uses the application server’s default datasource (of JBoss EAP in my case), so I don’t even have to set up a new datasource for my tests. Hibernate drops and recreates the tables during each deployment, so the tests start with a fresh database.

This test works great as long as you run it inside the container. The EntityManager only gets injected into the test if you deploy it to the application server. However, if you want to test your application from the outside, em will be null.

@Test
@RunAsClient
public void existingUserShouldBeFound()
{
    // some assertions against the API
}

If you add @RunAsClient to the test method or set the whole deployment to @Deployment(testable = false), the test runs outside the container (e.g. if you want to test a REST API). Dependency injection doesn’t work in this case, and the test fails:

java.lang.NullPointerException
    at it.macke.arquillian.api.IntegrationTest.setup(IntegrationTest.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    ...

Using an initialization observer

If you use Java EE 7 (which I do), there’s an easy way to run code at the start of your application: observers. If you add an observer that gets called as soon as the application is initialized, you can execute all the needed setup code before the first test runs.

Here’s an example of my observer that creates the test data in the database after the deployment to the remote server:

@ApplicationScoped
public class TestDataCreator
{
    @PersistenceContext
    private EntityManager em;

    @Transactional
    void createTestUser(
            @Observes @Initialized(ApplicationScoped.class) final Object event)
    {
        User user = new User("myuser", "mypassword");
        em.persist(user);
    }
}

Of course, you need to include the observer in your deployment:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addClass(TestDataCreator.class)
        ...
}

If you add some logging to TestDataCreator, you can see that it creates the test data after the deployment to the remote server:

19:28:27,589 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000227: Running hbm2ddl schema export
19:28:27,592 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000230: Schema export complete
...
19:28:27,916 INFO  [it.macke.arquillian.setup.TestDataCreator] (ServerService Thread Pool -- 143) Adding test user: User{username=myuser, password=mypassword}
...
19:28:28,051 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 143) WFLYUT0021: Registered web context: /2e36a3c3-be16-4e00-86c3-598ce822d286
19:28:28,101 INFO  [org.jboss.as.server] (management-handler-thread - 29) WFLYSRV0010: Deployed "2e36a3c3-be16-4e00-86c3-598ce822d286.war" (runtime-name : "2e36a3c3-be16-4e00-86c3-598ce822d286.war")

Now, my client test runs successfully, because the needed test data gets created as soon as the application is deployed.

How to test a REST API with Arquillian

Testing a REST API on a real application server with Arquillian is easy – if you know what you need to do 😉

I have this simple REST service that authenticates a user:

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response authenticateUser(final UserData userData)
{
    ...
    return Response
        .status(401)
        .entity(false)
        .build();
}

Let’s write a test for this REST API that uses a real application server and calls the interface from the outside.

Arquillian dependencies

The REST extensions for Arquillian make it fairly easy to test a deployed web application. Let’s start with their dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon (or any other specific container, e.g. JBoss, Wildfly etc.)
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6',
// REST extensions
'org.jboss.arquillian.extension:arquillian-rest-client-impl-jersey:1.0.0.Alpha4'

Arquillian test

I can now write a test against the REST interface on the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addPackages(true, Filters.exclude(".*Test.*"),
            SessionResource.class.getPackage(),
            ...)
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
}

@Test
@RunAsClient
public void authenticateUser(
    @ArquillianResteasyResource final WebTarget webTarget)
{
    final Response response = webTarget
        .path("/sessions")
        .request(MediaType.APPLICATION_JSON)
        .post(Entity.json(new UserData(
            "myuser",
            "mypassword")));
    assertEquals(true, response.readEntity(Boolean.class));
}

Note the @RunAsClient. This tells Arquillian to run the test as a client against the remote server and not within it. The test class isn’t even deployed to the application server in this case (see Filters.exclude(".*Test.*")). That’s exactly what I needed, because I wanted to test the REST API from the outside to make sure it’s available to its clients. Alternatively, you could set the whole deployment to @Deployment(testable = false) instead of configuring each test individually.

The REST extensions for Arquillian now call the test method with a WebTarget as a parameter. It points directly to the URL of the deployed web application, e.g. http://1.2.3.4:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. So you don’t have to build the target URL manually every time.

JBoss server configuration

It took me a while to find out how the target server – JBoss EAP 7 in my case – needs to be configured to make this test pass, because the initial error message after my first attempt to run the test wasn’t very helpful:

javax.ws.rs.ProcessingException: java.net.ConnectException: Connection refused: connect
    at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:287)
    at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:255)
    ...
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.connect0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)

I could see in the server’s logs that the deployment of the WAR file into JBoss was successful, so it had to be the REST client that couldn’t connect to the API. A simple System.out.println(webTarget.getUri()); showed the problem: http://0.0.0.0:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. Arquillian uses the IP configuration of the target server, and I had configured JBoss to listen on all of its interfaces (hence 0.0.0.0).

To fix this problem, I made JBoss listen on the exact public IP address of the server by adding this to \standalone\configuration\standalone.xml. Be sure to use the real IP address and not 0.0.0.0 or <any-address />.

<interfaces>
    <interface name="management">
        <inet-address value="1.2.3.4"/>
    </interface>
    <interface name="public">
        <inet-address value="1.2.3.4"/>
    </interface>
</interfaces>