## Mastering the Generational Change in Software AG’s Natural

Software AG recorded a video with us on how we cope with the generational change taking place in the Natural developer community.

How do you handle the problem of aging Natural developers? I’d love to hear your comments on that!

By the way, I’ll be talking about modern Natural development at IUG 2017! See you in Salzburg!

## Exception-like error handling in Software AG’s Natural

Error handling in Software AG’s Natural can be done in a way that resembles Exception handling in object-oriented languages like Java.

## throw

Instead of throwing an Exception, you raise an error simply by assigning a value to the system variable *ERROR-NR. As soon as a statement like the following is executed, the current program flow is interrupted and the nearest ON ERROR block is executed.

*ERROR-NR := 1234

In fact, we use exactly this feature for raising assertion errors in NatUnit.
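
In NatUnit's case, a failed assertion boils down to something like this (a simplified sketch, not NatUnit's actual source; the variable names and the error number 9876 are made up):

```
IF #ACTUAL NE #EXPECTED
  /* raise a (hypothetical) assertion error and let the nearest ON ERROR block handle it */
  *ERROR-NR := 9876
END-IF
```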

## catch

You can handle a Natural error in an ON ERROR block anywhere inside your code. Just like an Exception travels up through the call stack to get caught in the nearest try-catch block, a Natural error is handled in the nearest ON ERROR block.

Here’s an example of an ON ERROR block that exits the current module and marks the error as handled:

ON ERROR
  /* do something about it */
  ESCAPE MODULE
END-ERROR


## catch (SpecificException e)

You can only define a single ON ERROR block in each Natural module. So if you need to handle specific errors in a different way, you need to have some kind of distinction logic like this:

ON ERROR
  IF *ERROR-NR EQ 1234
    /* do something about it */
    ESCAPE MODULE
  END-IF
END-ERROR


Or if you need to distinguish between multiple errors:

ON ERROR
  DECIDE ON FIRST VALUE OF *ERROR-NR
    VALUE 1234
      /* do something about it */
      ESCAPE MODULE
    VALUE 1235
      /* do something about it */
      ESCAPE MODULE
    NONE IGNORE
  END-DECIDE
END-ERROR


## re-throw

If you can’t handle the error in an ON ERROR block, but you want to log it or do something else with it before letting the next ON ERROR block handle it, you don’t need to do anything at all, because that’s the default behaviour.

However, if you exit the ON ERROR block with any statement from the following list, the error is marked as handled and the normal control flow (in the calling module of the module containing the ON ERROR block) is continued. So be sure not to exit the block with any of these statements.

Exiting from an ON ERROR Block:
An ON ERROR block may be exited by using a FETCH, STOP, TERMINATE, RETRY, ESCAPE ROUTINE or ESCAPE MODULE statement. If the block is not exited using one of these statements, standard error message processing is performed and program execution is terminated.

Here’s an example of such a “re-throw”:

ON ERROR
  IF *ERROR-NR EQ 1234
    /* log the error */
    /* DON'T exit with FETCH, STOP, TERMINATE, RETRY, ESCAPE ROUTINE or ESCAPE MODULE */
  END-IF
END-ERROR


## Checking which error occurred

Even if you “handle” the Natural error in an ON ERROR block, e.g. by using ESCAPE MODULE, the system variable *ERROR-NR isn’t reset to 0. You have to reset it yourself if needed. If you don’t, the variable can be used in the calling module to check whether a (handled) error occurred. By the way, the system variable *ERROR-LINE contains the line number of the statement that raised the error.

CALLER

CALLNAT 'CALLEE'
IF *ERROR-NR NE 0
  WRITE 'Error' *ERROR-NR 'occurred in line' *ERROR-LINE 'while calling CALLEE'
  /* prints: "Error     1234 occurred in line ... while calling CALLEE" */
END-IF
END


CALLEE

*ERROR-NR := 1234
ON ERROR
  ESCAPE MODULE
END-ERROR
END


If you don’t want any caller to know that an error occurred, simply reset *ERROR-NR:

ON ERROR
  RESET *ERROR-NR
  ESCAPE MODULE
END-ERROR


## Global error handler (like a try-catch in main())

You can define a global error handler by setting the system variable *ERROR-TA to the name of a Natural module. In case of an error, Natural automatically calls this module (which has to be a program) and puts information about the error on the stack. The system variables *ERROR-NR and *ERROR-LINE will be reset at this point, so the error handler has to read the information from the stack with INPUT.

CALLER

*ERROR-TA := 'HANDLER'
CALLNAT 'CALLEE'
END


CALLEE

*ERROR-NR := 1234
END


HANDLER

DEFINE DATA LOCAL
1 #ERROR-NR           (N5)
1 #LINE               (N4)
1 #STATUS-CODE        (A1)
1 #PROGRAM            (A8)
1 #LEVEL              (N2)
1 #LEVELI4            (I4)
1 #POSITION-IN-LINE   (N3)
1 #LENGTH-OF-ITEM     (N3)
END-DEFINE

/* read error information from stack */
INPUT #ERROR-NR #LINE #STATUS-CODE #PROGRAM #LEVEL #LEVELI4

WRITE #ERROR-NR #LINE #STATUS-CODE #PROGRAM #LEVEL #LEVELI4 #STATUS-CODE
WRITE *ERROR-NR *ERROR-LINE

END


Output:

1234    10 O CALLEE     2           0 O
0     0


For more information about the error information on the stack take a look at the section Using an Error Transaction Program in the Natural documentation.

If you need to find out more about the current error, e.g. in your ON ERROR block, there are quite a few User Exits that deal with errors:

• USR0040N: Get type of last error
• USR1016N: Get error level for error in nested copycodes
• USR2001N: Get information on last error
• USR2006N: Get information from error message collector
• USR2007N: Get or set data for RPC default server
• USR2010N: Get error information on last database call
• USR2026N: Get TECH information
• USR2030N: Get dynamic error message parts from the last error
• USR3320N: Find user short error message (including steplibs search)
• USR4214N: Get program level information

## Automatically reconnect to a database after a failure in JBoss EAP 7

My JBoss EAP 7 server couldn’t cope with a failing database. After a database restart or outage, e.g. due to maintenance, it would not reconnect to the database automatically. The application simply stopped working as soon as the database was unavailable for a short period of time.

JBoss’s server.log was full of (not very helpful) error messages like this:

2017-03-01 12:05:18,175 ERROR [it.macke.repository.UserRepository] (default task-17) Error reading user: org.hibernate.exception.JDBCConnectionException: could not prepare statement: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: could not prepare statement
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1692)
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1602)
...
Caused by: org.hibernate.exception.JDBCConnectionException: could not prepare statement
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:48)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
...
Caused by: java.sql.SQLNonTransientConnectionException: Connection is close


To make the application work again, I had to restart JBoss and every now and then even the whole Windows server on which it runs.

As it turns out, this is a common problem with JBoss. However, the solution is quite easy. You only need to configure JBoss to validate the database connections. Here’s how this would look in standalone.xml (take a look at the content of element validation):

<datasource jndi-name="java:jboss/jdbc/MyDS" pool-name="MyDS">
  <security>
    <user-name>user</user-name>
  </security>
  <validation>
    <check-valid-connection-sql>select 1</check-valid-connection-sql>
    <validate-on-match>true</validate-on-match>
  </validation>
</datasource>


And if you want to script the setting, here’s the code for the CLI:

/subsystem=datasources/data-source=MyDS:write-attribute(name=validate-on-match,value=true)
/subsystem=datasources/data-source=MyDS:write-attribute(name=check-valid-connection-sql,value="select 1")


The above settings make JBoss validate the connection on every new request (validate-on-match). Alternatively, you can have it check the connections in the background, e.g. every few seconds. Also make sure to use the correct settings for your database: I use MariaDB/MySQL for my application, and the exception-sorter and check-valid-connection-sql values may differ for yours (e.g. org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker and select 1 from dual for Oracle). Take a look at the available parameters here: Database Connection Validation in the JBoss EAP 7 Configuration Guide.
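
For background checking, the relevant datasource attributes are background-validation and background-validation-millis. Note that background validation and validate-on-match are mutually exclusive, so switch validate-on-match off if you use it. A sketch for the CLI, assuming a check every 10 seconds:

```
/subsystem=datasources/data-source=MyDS:write-attribute(name=validate-on-match,value=false)
/subsystem=datasources/data-source=MyDS:write-attribute(name=background-validation,value=true)
/subsystem=datasources/data-source=MyDS:write-attribute(name=background-validation-millis,value=10000)
```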

## How to find the physical file path of the current FUSER of a Natural runtime

Here’s a short subroutine for reading the physical file path of the current FUSER of a Software AG Natural runtime. I’m not sure whether it works on a mainframe, but it definitely runs on Linux.

If it runs successfully, the subroutine returns the following information:

P-FUSER-PATH /home/macke/fuser
P-RC 0


Otherwise the return code P-RC will have a value other than zero.

It uses two user exits:

• USR6006N: Get path to system file
• USR2013N: Get SYSPROF information

USR2013N reads the information about the current FUSER and returns its DBID and file number. USR6006N then takes these two inputs and returns the physical file path of the FUSER.

## Subroutine GET-CURRENT-FUSER-PATH

**************************************************************************
*
*  File: GET-CURRENT-FUSER-PATH (VNGFUPAT)
*
*  Reads the physical file path for the current FUSER.
*
*  Tags: FUSER, UserExit
*
*  Parameters:
*    -
*
*  Returns:
*    P-FUSER-PATH - File path for the current FUSER.
*    P-RC - Return code
*
**************************************************************************
DEFINE DATA
*
PARAMETER
*
01 P-RC (I4) BY VALUE RESULT
01 P-FUSER-PATH (A) DYNAMIC BY VALUE RESULT
*
LOCAL
*
* Get path to system file
01 USR6006L
02 INPUTS
03 SYSF-DBID (I4)
03 SYSF-FNR (I4)
02 OUTPUTS
03 SYSF-PATH (A253)
03 RESPONSE-CODE (I4)
03 INFOTEXT (A65)
01 EXTENSIONS (A1/1:1)
*
* Get SYSPROF information
01 USR2013L
02 OUTPUTS
03 FILENAME (A12/1:50)
03 DBID (P5/1:50)
03 FNR (P5/1:50)
03 DBNAME (A11/1:50)
03 AMOUNT (P4)
*
01 #INDEX (I4)
*
01 #FUSER-DBID (N8)
01 #FUSER-FNR (N8)
*
END-DEFINE
*
DEFINE SUBROUTINE GET-CURRENT-FUSER-PATH
*
RESET P-FUSER-PATH P-RC USR6006L USR2013L EXTENSIONS(*) #FUSER-DBID #FUSER-FNR
*
CALLNAT 'USR2013N'  USR2013L
*
FOR #INDEX = 1 TO USR2013L.AMOUNT
  IF USR2013L.FILENAME(#INDEX) EQ 'FUSER'
    #FUSER-DBID := USR2013L.DBID(#INDEX)
    #FUSER-FNR  := USR2013L.FNR(#INDEX)
  END-IF
END-FOR
*
IF #FUSER-DBID EQ 0 OR #FUSER-FNR EQ 0
  P-RC := 1
  ESCAPE MODULE
END-IF
*
USR6006L.SYSF-DBID := #FUSER-DBID
USR6006L.SYSF-FNR  := #FUSER-FNR
*
CALLNAT 'USR6006N' USR6006L EXTENSIONS(*)
*
P-FUSER-PATH := USR6006L.SYSF-PATH
P-RC := USR6006L.RESPONSE-CODE
*
END-SUBROUTINE
*
END
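
A caller might use the subroutine like this (a sketch; the local variable names are mine):

```
DEFINE DATA LOCAL
01 #RC          (I4)
01 #FUSER-PATH  (A) DYNAMIC
END-DEFINE
*
PERFORM GET-CURRENT-FUSER-PATH #RC #FUSER-PATH
*
IF #RC EQ 0
  WRITE 'Current FUSER:' #FUSER-PATH
ELSE
  WRITE 'Could not determine FUSER path, RC:' #RC
END-IF
*
END
```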


## How to combine/join file paths in Gradle/Groovy

One might think that joining (or combining) two file paths together with Groovy would be an easy thing to do. I’m used to “nice” methods from Ruby like File.join (see How to do a safe join pathname in ruby?):

File.join("path", "to", "join")


As it turns out, there is no such method in Groovy. However, here are two easy ways to safely combine paths in Groovy (which I use in my Gradle build):

import java.nio.file.Paths // only needed for second example

def dir1 = "/the/path"
def dir2 = "to/join"

println new File(dir1, dir2)
println Paths.get(dir1, dir2)
// -> both print "\the\path\to\join" (on my Windows machine)


## How to read a user’s Google+ circles in Ruby with an OmniAuth access token

After a few hours of trial and error, and finally getting authentication against Google’s API working in Ruby, I think it’s time for a blog post 😉

I had a simple (at least I thought it was simple) requirement: Reading the people in a user’s Google+ circles with Ruby. Because I use OmniAuth in my application, the user is already authenticated and I even have his access token stored in the database. However, it took me a few hours to find out how to use this token to access Google’s API.

Reading the people from a user’s circles is quite easy. Simply use plus.people.list. You can try it with the Google APIs Explorer. However, you need to make sure your app requests the right permissions. This is the corresponding line from my Rails initializer omniauth.rb (I needed to add the scope plus.login and change the access type to offline):

provider :google_oauth2, "CLIENTID", "SECRET", scope: 'profile,email,plus.login', image_aspect_ratio: 'square', image_size: 48, access_type: 'offline', name: 'google'


The Ruby code using google-api-ruby-client looks like this:

plus = Google::Apis::PlusV1::PlusService.new
friends = plus.list_people("me", "visible").items


The items you get from Google+ look like this:

#<Google::Apis::PlusV1::Person:0x4307fd0
@display_name="The Name",
@etag="\"some tag\"",
@id="1234",
@kind="plus#person",
@object_type="person",
...

Google’s documentation states that you need to use Signet to authenticate against the Google API. So I thought I’d give it a try and started coding. But as it turns out, Signet is a complex beast and I didn’t want to re-implement the whole OAuth authentication process, because OmniAuth already does that for me. I just wanted to use my existing token for authentication!

Long story short: I found the solution in file http_command.rb in method apply_request_options():

if options.authorization.respond_to?(:apply!)
  # ...
elsif options.authorization.is_a?(String)
  # ...
end


You can simply set the attribute authorization of your PlusService to a string (instead of a Signet object) and it will be set as the Bearer in the HTTP request to Google’s API. So, I simply had to add this line to my calling code and I was done:

plus.authorization = access_token


# The final code

I still can’t believe that the final solution is so simple! 🙂

Gemfile:

gem 'google-api-client', '~> 0.9'


omniauth.rb:

provider :google_oauth2, "CLIENTID", "SECRET", scope: 'profile,email,plus.login', image_aspect_ratio: 'square', image_size: 48, access_type: 'offline', name: 'google'


friend_reader.rb:

require 'google/apis/plus_v1'

plus = Google::Apis::PlusV1::PlusService.new
plus.authorization = access_token
friends = plus.list_people("me", "visible").items


## How to enable access logging (accesslog) in JBoss EAP 7

Configuring JBoss EAP 7 to write an access log (e.g. like Apache webserver) is quite easy with the CLI:

/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add


If you need any additional configuration, take a look at this: Wildfly 10.0.0.Final Model Reference.

For example, to change the prefix of the log’s file name:

/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=prefix, value="my_access")


Alternatively, you could change the configuration in the config XML (e.g. standalone.xml):

<subsystem xmlns="urn:jboss:domain:undertow:3.1">
  ...
  <server name="default-server">
    ...
    <host name="default-host" alias="localhost">
      ...
      <access-log prefix="my_access" />
      ...
    </host>
  </server>
  ...
</subsystem>


JBoss will now write every access to the application to accesslog in the log directory (e.g. JBOSS_HOME\standalone\log):

192.168.1.1 - - [23/Jun/2016:13:29:24 +0200] "GET /MyApp/MyPath/MyFile.xhtml HTTP/1.1" 200 5738


## How to seed the database with sample data for an Arquillian test

Arquillian makes it easy to test your Java EE application on a real server. However, if you deploy an application that uses a database to perform its tasks, how do you make sure a database with test entries is available on the target server?

If you set up a “real” database on the server, the tests become hard to understand, because the data that’s used in assertions is not visible directly in the test. You have to know the content of the database to make sense of the tests. In addition, you’ll have to maintain the database and keep the data in a consistent state.

Instead, I would like my tests to create the test data themselves, so that I can see everything I need to know to understand the tests in one place and I don’t have to maintain a database system.

# Using EntityManager

My first attempt was to inject an EntityManager into my Arquillian test and set up the test data like this:

@RunWith(Arquillian.class)
public class IntegrationTest
{
    @PersistenceContext
    private EntityManager em;

    @Resource
    private UserTransaction userTransaction;

    private User user;

    @Deployment
    public static WebArchive createDeployment()
    {
        return ShrinkWrap
            .create(WebArchive.class)
            ...
    }

    @Before
    public void setup() throws Exception
    {
        user = new User("myuser", "mypassword");

        userTransaction.begin();
        em.persist(user);
        userTransaction.commit();
    }

    @Test
    public void existingUserShouldBeFound()
    {
        assertThat(
            em.find(User.class, "myuser"),
            is(user));
    }
}


The file arquillian/persistence.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
  <persistence-unit name="myPU">
    <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
    <properties>
      <property name="hibernate.hbm2ddl.auto" value="create-drop" />
    </properties>
  </persistence-unit>
</persistence>


It simply uses the application server’s default datasource (JBoss EAP’s in my case), so I don’t even have to set up a new datasource for my tests. Hibernate drops and recreates the tables during each deployment, so the tests start with a fresh database.

This test works great, as long as you run it inside the container. The EntityManager only gets injected into the test, if you deploy it to the application server. However, if you want to test your application from the outside, em will be null.

@Test
@RunAsClient
public void existingUserShouldBeFound()
{
    // some assertions against the API
}


If you add @RunAsClient to the test method or set the deployment to @Deployment(testable = false), the test is run outside the container (e.g. if you want to test a REST API). Dependency injection doesn’t work in this case and the test fails:

java.lang.NullPointerException
at it.macke.arquillian.api.IntegrationTest.setup(IntegrationTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
...


# Using an initialization observer

If you use Java EE 7 (which I do), there’s an easy way to run code at the start of your application: observers. If you add an observer that gets notified as soon as the application is initialized, you can execute all the needed setup code before the first test is run.

Here’s an example of my observer that creates the test data in the database after the deployment to the remote server:

@ApplicationScoped
public class TestDataCreator
{
    @PersistenceContext
    private EntityManager em;

    @Transactional
    void createTestUser(
        @Observes @Initialized(ApplicationScoped.class) final Object event)
    {
        User user = new User("myuser", "mypassword");
        em.persist(user);
    }
}


Of course, you need to include the observer in your deployment:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        ...
}


If you add some logging to TestDataCreator, you can see that it creates the test data after the deployment to the remote server:

19:28:27,589 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000227: Running hbm2ddl schema export
19:28:27,592 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000230: Schema export complete
...
...
19:28:28,051 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 143) WFLYUT0021: Registered web context: /2e36a3c3-be16-4e00-86c3-598ce822d286
19:28:28,101 INFO  [org.jboss.as.server] (management-handler-thread - 29) WFLYSRV0010: Deployed "2e36a3c3-be16-4e00-86c3-598ce822d286.war" (runtime-name : "2e36a3c3-be16-4e00-86c3-598ce822d286.war")


Now, my client test runs successfully, because the needed test data gets created as soon as the application is deployed.

## How to test a REST API with Arquillian

Testing a REST API on a real application server with Arquillian is easy – if you know what you need to do 😉

I have this simple REST service that authenticates a user:

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response authenticateUser(final UserData userData)
{
    ...
    return Response
        .status(401)
        .entity(false)
        .build();
}


Let’s write a test for this REST API, that uses a real application server and calls the interface from the outside.

# Arquillian dependencies

The REST extensions for Arquillian make it fairly easy to test a deployed web application. Let’s start with the dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon (or any other specific container, e.g. JBoss, Wildfly etc.)
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6',
// REST extensions
'org.jboss.arquillian.extension:arquillian-rest-client-impl-jersey:1.0.0.Alpha4'


# Arquillian test

I can now write a test against the REST interface on the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addPackages(true,
            SessionResource.class.getPackage(),
            ...)
    ...
}

@Test
@RunAsClient
public void authenticateUser(
    @ArquillianResteasyResource final WebTarget webTarget)
{
    final Response response = webTarget
        .path("/sessions")
        .request(MediaType.APPLICATION_JSON)
        .post(Entity.json(new UserData(
            "myuser",
            "mypassword")));

    // assertions on the response
}


Note the @RunAsClient. This tells Arquillian to run the test as a client against the remote server and not within the remote server. The test class isn’t even deployed to the application server in this case (see Filters.exclude(".*Test.*")). That’s exactly what I needed, because I wanted to test the REST API from the outside, to make sure it’s available to its clients. Alternatively, you could set the whole deployment to @Deployment(testable = false), instead of configuring each test individually.

The REST extensions for Arquillian now call the test method with a WebTarget parameter. It points directly to the URL of the deployed web application, e.g. http://1.2.3.4:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest, so you don’t have to construct the target URL manually every time.

# JBoss server configuration

It took me a while to find out how the target server – JBoss EAP 7 in my case – needs to be configured to make this test pass, because the initial error message after my first attempt to run the test wasn’t helpful at all:

javax.ws.rs.ProcessingException: java.net.ConnectException: Connection refused: connect
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:287)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:255)
...
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)


I could see in the server’s logs that the deployment of the WAR file into JBoss was successful, so it had to be the REST client that couldn’t connect to the API. A simple System.out.println(webTarget.getUri()); showed the problem: http://0.0.0.0:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. Arquillian uses the IP configuration of the target server, and I had configured JBoss to listen on all of its interfaces (hence 0.0.0.0).

To fix this problem I made JBoss listen on the exact public IP address of the server by adding this to \standalone\configuration\standalone.xml. Be sure to use the real IP address and not 0.0.0.0 or <any-address />.

<interfaces>
  <interface name="management">
    <inet-address value="1.2.3.4" />
  </interface>
  <interface name="public">
    <inet-address value="1.2.3.4" />
  </interface>
</interfaces>


## How to test JBoss EAP 7 with Arquillian

It took me quite some time to get my Arquillian tests running against a remote JBoss EAP 7.0.0.Beta1 application server, so I thought I’d share my configuration.

At the time of this writing, there was no Arquillian container adapter for JBoss EAP 7 available, so I had to use the Arquillian Chameleon Container. However, it turns out that Chameleon is even easier to configure than a specific container, because it more or less configures itself: it automatically downloads the needed container for you, if you tell it which application server you want to use.

# Java project

## Arquillian dependencies

Let’s start with the dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6'


## arquillian.xml

My arquillian.xml looks like this:

<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
  <container qualifier="chameleon" default="true">
    <configuration>
      <property name="chameleonTarget">jboss eap:7.0.0.Beta:remote</property>
      <property name="managementAddress">THE_SERVER_ADDRESS</property>
      <property name="managementPort">THE_PORT_IF_NOT_9990</property>
      <property name="username">THE_MANAGEMENT_USER</property>
      <property name="password">THE_PASSWORD</property>
    </configuration>
  </container>
</arquillian>


The single line containing chameleonTarget defines the target server to be a remote JBoss EAP 7. That means JBoss has to be running on the target machine at the specified address or hostname and port. I chose this style of testing over the managed alternative because it reduces the overhead of starting and stopping the server during tests. In addition, I think it’s a more realistic test, as the application gets deployed to a “real” server.

## Arquillian test

I can now write a test against the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        ...
}

@Test
public void authenticateUser()
{
    // asserts
}


## persistence.xml

My persistence.xml for the Arquillian test simply uses JBoss’s default data source for the test. Hibernate creates and drops the tables before and after each test run.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="myPU">
<jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
<properties>
<property name="hibernate.hbm2ddl.auto" value="create-drop" />
</properties>
</persistence-unit>
</persistence>


# Server configuration

To successfully run the test, quite some configuration on the server side was needed. Here’s what I had to do.

## Add a management user

If you want to deploy to a remote JBoss server, you need a management user (if you test against a JBoss server on your local machine, this is not the case). The credentials have to be configured in arquillian.xml (see above).

C:\jboss-eap-7.0.0.Beta\bin>add-user.bat

What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): a

Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.
- The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
Are you sure you want to use the password entered yes/no? yes
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
Is this correct yes/no? yes
Added user 'remote' to file 'C:\jboss-eap-7.0.0.Beta\standalone\configuration\mgmt-users.properties'
Added user 'remote' to file 'C:\jboss-eap-7.0.0.Beta\domain\configuration\mgmt-users.properties'
Added user 'remote' with groups  to file 'C:\jboss-eap-7.0.0.Beta\standalone\configuration\mgmt-groups.properties'
Added user 'remote' with groups  to file 'C:\jboss-eap-7.0.0.Beta\domain\configuration\mgmt-groups.properties'
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? no


## Bind to the public IP address

To make JBoss listen on the public IP address of the server, add this to \standalone\configuration\standalone.xml. The default configuration only allows connections on 127.0.0.1, which of course won’t be available from the outside.

<interfaces>
  <interface name="management">
    <inet-address value="1.2.3.4" />
  </interface>
  <interface name="public">
    <inet-address value="1.2.3.4" />
  </interface>
</interfaces>


## Local repository

If you use a local repository like Artifactory (perhaps because you are behind a proxy server like me), you need to configure Maven to use this repository. Arquillian Chameleon downloads its artifacts directly during the test (not during the build, where your individual repository would be configured in the build file) and uses whatever repository is configured as central. In my case, Chameleon could not connect to repo1.maven.org to download its dependencies, so I had to configure the local repository in the central Maven configuration file ~\.m2\settings.xml like so:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <profiles>
    <profile>
      <id>artifactory</id>
      <repositories>
        <repository>
          <id>central</id>
          <url>http://artifactory.intranet/java-repos</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>artifactory</activeProfile>
  </activeProfiles>
</settings>