How to seed the database with sample data for an Arquillian test

Arquillian makes it easy to test your Java EE application on a real server. However, if the deployed application uses a database to perform its tasks, how do you make sure a database with test entries is available on the target server?

If you set up a “real” database on the server, the tests become hard to understand, because the data that’s used in assertions is not visible directly in the test. You have to know the content of the database to make sense of the tests. In addition, you’ll have to maintain the database and keep the data in a consistent state.

Instead, I would like my tests to create the test data themselves, so that I can see everything I need to know to understand the tests in one place and I don’t have to maintain a database system.

Using EntityManager

My first attempt was to inject an EntityManager into my Arquillian test and set up the test data like this:

@RunWith(Arquillian.class)
public class IntegrationTest
{
    @PersistenceContext
    private EntityManager em;

    @Resource
    private UserTransaction userTransaction;

    private User user;

    @Deployment
    public static WebArchive createDeployment()
    {
        return ShrinkWrap
                .create(WebArchive.class)
                .addClass(User.class)
                ...
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
                .addAsResource("arquillian/persistence.xml", "META-INF/persistence.xml");
    }

    @Before
    public void setup() throws Exception
    {
        user = new User("myuser", "mypassword");
        userTransaction.begin();
        em.persist(user);
        userTransaction.commit();
    }

    @Test
    public void existingUserShouldBeFound()
    {
        assertThat(
                em.find(User.class, "myuser"),
                is(user));
    }
}

The file arquillian/persistence.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        </properties>
    </persistence-unit>
</persistence>

It simply uses the application server’s default datasource (of JBoss EAP in my case), so I don’t even have to set up a new datasource for my tests. Hibernate drops and recreates the tables during each deployment, so the tests start with a fresh database.

This test works great as long as you run it inside the container. The EntityManager only gets injected into the test if the test itself is deployed to the application server. However, if you want to test your application from the outside, em will be null.

@Test
@RunAsClient
public void existingUserShouldBeFound()
{
    // some assertions against the API
}

If you add @RunAsClient to the test method or set the deployment to @Deployment(testable = false), the test is run outside the container (e.g. if you want to test a REST API). Dependency injection doesn’t work in this case and the test fails:

java.lang.NullPointerException
    at it.macke.arquillian.api.IntegrationTest.setup(IntegrationTest.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    ...

Using an initialization observer

If you use Java EE 7 (which I do), there’s an easy way to run code at the start of your application: observers. If you add an observer that gets called as soon as the application has initialized, you can execute all the needed setup code before the first test runs.

Here’s an example of my observer, which creates the test data in the database after the deployment to the remote server:

@ApplicationScoped
public class TestDataCreator
{
    @PersistenceContext
    private EntityManager em;

    @Transactional
    void createTestUser(
            @Observes @Initialized(ApplicationScoped.class) final Object event)
    {
        User user = new User("myuser", "mypassword");
        em.persist(user);
    }
}

Of course, you need to include the observer in your deployment:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addClass(TestDataCreator.class)
        ...
}

If you add some logging to TestDataCreator, you can see that it creates the test data after the deployment to the remote server:

19:28:27,589 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000227: Running hbm2ddl schema export
19:28:27,592 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000230: Schema export complete
...
19:28:27,916 INFO  [it.macke.arquillian.setup.TestDataCreator] (ServerService Thread Pool -- 143) Adding test user: User{username=myuser, password=mypassword}
...
19:28:28,051 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 143) WFLYUT0021: Registered web context: /2e36a3c3-be16-4e00-86c3-598ce822d286
19:28:28,101 INFO  [org.jboss.as.server] (management-handler-thread - 29) WFLYSRV0010: Deployed "2e36a3c3-be16-4e00-86c3-598ce822d286.war" (runtime-name : "2e36a3c3-be16-4e00-86c3-598ce822d286.war")

Now, my client test runs successfully, because the needed test data gets created as soon as the application is deployed.

How to test a REST API with Arquillian

Testing a REST API on a real application server with Arquillian is easy – if you know what you need to do 😉

I have this simple REST service that authenticates a user:

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response authenticateUser(final UserData userData)
{
    ...
    return Response
        .status(401)
        .entity(false)
        .build();
}

Let’s write a test for this REST API that uses a real application server and calls the interface from the outside.

Arquillian dependencies

The REST extensions for Arquillian make it fairly easy to test a deployed web application. Let’s start with the dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon (or any other specific container, e.g. JBoss, Wildfly etc.)
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6',
// REST extensions
'org.jboss.arquillian.extension:arquillian-rest-client-impl-jersey:1.0.0.Alpha4'
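For context, these coordinates presumably sit inside the dependencies block of build.gradle. A sketch of the assumed surrounding structure (the testCompile configuration name is an assumption, matching Gradle versions current at the time of the article):

```groovy
dependencies {
    // Arquillian test dependencies as listed above
    testCompile 'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
        'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
        'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6',
        'org.jboss.arquillian.extension:arquillian-rest-client-impl-jersey:1.0.0.Alpha4'
}
```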

Arquillian test

I can now write a test against the REST interface on the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addPackages(true, Filters.exclude(".*Test.*"),
            SessionResource.class.getPackage(),
            ...)
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
}

@Test
@RunAsClient
public void authenticateUser(
    @ArquillianResteasyResource final WebTarget webTarget)
{
    final Response response = webTarget
        .path("/sessions")
        .request(MediaType.APPLICATION_JSON)
        .post(Entity.json(new UserData(
            "myuser",
            "mypassword")));
    assertEquals(true, response.readEntity(Boolean.class));
}

Note the @RunAsClient. This tells Arquillian to run the test as a client against the remote server and not within the remote server. The test class isn’t even deployed to the application server in this case (see Filters.exclude(".*Test.*")). That’s exactly what I needed, because I wanted to test the REST API from the outside, to make sure it’s available to its clients. Alternatively, you could set the whole deployment to @Deployment(testable = false), instead of configuring each test individually.

The REST extensions for Arquillian now call the test method with a WebTarget as a parameter. It points directly to the URL of the deployed web application, e.g. http://1.2.3.4:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. So you don’t have to create the target URL manually every time.
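What the injected WebTarget does for you can be illustrated with a few lines of plain Java. The join helper below is only an illustration of what path() amounts to, not the JAX-RS implementation, and the base URL is the example from above:

```java
public class TargetUri {
    // Joins a base URL and a path segment without duplicating slashes,
    // illustrating what webTarget.path("/sessions") amounts to.
    static String join(String base, String segment) {
        return base.replaceAll("/+$", "") + "/" + segment.replaceAll("^/+", "");
    }

    public static void main(String[] args) {
        String base = "http://1.2.3.4:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest";
        System.out.println(join(base, "/sessions"));
        // http://1.2.3.4:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest/sessions
    }
}
```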

JBoss server configuration

It took me a while to find out how the target server – JBoss EAP 7 in my case – needs to be configured to make this test pass, because the initial error message after my first attempt to run the test wasn’t helpful at all:

javax.ws.rs.ProcessingException: java.net.ConnectException: Connection refused: connect
    at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:287)
    at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:255)
    ...
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.connect0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)

I could see in the server’s logs that the deployment of the WAR file into JBoss was successful, so it had to be the REST client that couldn’t connect to the API. A simple System.out.println(webTarget.getUri()); showed the problem: http://0.0.0.0:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. Arquillian uses the IP configuration of the target server, and I had configured JBoss to listen on all of its interfaces (hence 0.0.0.0).

To fix this problem, I made JBoss listen on the exact public IP address of the server by adding this to \standalone\configuration\standalone.xml. Be sure to use the real IP address, not 0.0.0.0 or <any-address />.

<interfaces>
    <interface name="management">
        <inet-address value="1.2.3.4"/>
    </interface>
    <interface name="public">
        <inet-address value="1.2.3.4"/>
    </interface>
</interfaces>

How to test JBoss EAP 7 with Arquillian

It took me quite some time to get my Arquillian tests running against a remote JBoss EAP 7.0.0.Beta1 application server, so I thought I’d share my configuration.

At the time of this writing, there was no Arquillian container adapter for JBoss EAP 7 available, so I had to use the Arquillian Chameleon Container. However, it turns out that Chameleon is even easier to configure than a specific container, because it more or less configures itself: if you tell it which application server you want to use, Chameleon automatically downloads the needed container adapter for you.

Java project

Arquillian dependencies

Let’s start with the dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6'

arquillian.xml

My arquillian.xml looks like this:

<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <container qualifier="chameleon" default="true">
        <configuration>
            <property name="chameleonTarget">jboss eap:7.0.0.Beta:remote</property>
            <property name="managementAddress">THE_SERVERNAME_OR_IP</property>
            <property name="managementPort">THE_PORT_IF_NOT_9990</property>
            <property name="username">THE_USER</property>
            <property name="password">THE_PASSWORD</property>
        </configuration>
    </container>
</arquillian>

The single property chameleonTarget defines the target server to be a remote JBoss EAP 7. That means JBoss has to be running on the target machine at the specified address or hostname and port. I chose this style of testing over the managed alternative, because it reduces the overhead of starting and stopping the server during tests. In addition, I think it’s a more realistic test, as the application gets deployed to a “real” server.

Arquillian test

I can now write a test against the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addPackage(MyClass.class.getPackage())
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
        .addAsResource("arquillian/persistence.xml", "META-INF/persistence.xml");
}

@Test
public void authenticateUser()
{
    // asserts
}

persistence.xml

My persistence.xml for the Arquillian test simply uses JBoss’s default data source for the test. Hibernate creates and drops the tables before and after each test run.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        </properties>
    </persistence-unit>
</persistence>

Server configuration

To successfully run the test, quite some configuration on the server side was needed. Here’s what I had to do.

Add a management user

If you want to deploy to a remote JBoss server, you need a management user (if you test against a JBoss server on your local machine, this is not necessary). The credentials have to match those configured in arquillian.xml (see above).

C:\jboss-eap-7.0.0.Beta\bin>add-user.bat

What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): a

Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Username : remote
Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.
 - The password should be different from the username
 - The password should not be one of the following restricted values {root, admin, administrator}
 - The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
Password :
WFLYDM0098: The password should be different from the username
Are you sure you want to use the password entered yes/no? yes
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
About to add user 'remote' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'remote' to file 'C:\jboss-eap-7.0.0.Beta\standalone\configuration\mgmt-users.properties'
Added user 'remote' to file 'C:\jboss-eap-7.0.0.Beta\domain\configuration\mgmt-users.properties'
Added user 'remote' with groups  to file 'C:\jboss-eap-7.0.0.Beta\standalone\configuration\mgmt-groups.properties'
Added user 'remote' with groups  to file 'C:\jboss-eap-7.0.0.Beta\domain\configuration\mgmt-groups.properties'
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? no

Bind to the public IP address

To make JBoss listen on the public IP address of the server, add this to \standalone\configuration\standalone.xml. The default configuration only allows connections on 127.0.0.1, which of course won’t be available from the outside.

<interfaces>
    <interface name="management">
        <any-address />
    </interface>
    <interface name="public">
        <any-address />
    </interface>
</interfaces>

Local repository

If you use a local repository like Artifactory (perhaps because you are behind a proxy server like me), you need to configure Maven to use this repository. Arquillian Chameleon downloads its needed artifacts directly during the test (and not during the build, where your individual repository would be configured in the build file) and uses whatever repository is configured as central. In my case, Chameleon could not connect to repo1.maven.org to download its dependencies, so I had to configure the local repository in the central Maven configuration file ~\.m2\settings.xml like so:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>artifactory</id>
            <repositories>
                <repository>
                    <id>central</id>
                    <url>http://artifactory.intranet/java-repos</url>
                    <snapshots>
                        <enabled>false</enabled>
                    </snapshots>
                </repository>
            </repositories>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>artifactory</activeProfile>
    </activeProfiles>
</settings>

Test everything! – Lessons Learned from SOA-fying a Monolith

Another lesson we learned while making our legacy application ready for a service-oriented architecture, is this:

Test everything.

Test everything

When I started out writing Flow services in webMethods Integration Server (IS), there was no (nice) way of testing them automatically. Although consultants told us multiple times that there would be a test framework for IS, we never got the actual code. So I had to develop a test framework myself, simply to be sure that everything still worked as before after a deployment of IS.

The result of my development effort is two small Java frameworks: ao-idata-converter and ao-integrationserver.

I wanted to be able to test IS services (Flow, Java, etc.) with the established tool we already used in our projects: JUnit. However, Integration Server’s Java interface relies on IData, the internal representation of IS’s pipeline, and working with it can get pretty annoying, because it’s nothing more than a big hash table with its own API. You have to deconstruct your Java objects into the structure of IData every time you call a service and compose them back together when you get the results, just to be able to call an assertion on them.

ao-idata-converter

My solution for this problem is a project called ao-idata-converter. It takes any Plain Old Java Object (POJO) and converts it to IData with a bit of reflection magic. You can even use beans with getters and setters and map attribute names to different fields in the pipeline. So, with ao-idata-converter you can get rid of all the converting from and to IData in your code.

IData convertedObject =
    new ObjectConverter().convertToIData("address", addressObject);
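The core idea can be sketched in a few lines of plain Java. Since IData is essentially a big hash table, a plain Map stands in for it here; the class and the Address example bean are made up for illustration, while the real framework targets com.wm.data.IData and additionally supports beans and name mapping:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

public class PojoToPipeline {
    // Hypothetical example bean; its public fields become pipeline entries.
    public static class Address {
        public String street = "Main St 1";
        public String city = "Springfield";
    }

    // Copies all public fields of a POJO into a Map, mimicking the
    // reflection-based conversion ao-idata-converter performs for IData.
    static Map<String, Object> convert(Object pojo) throws IllegalAccessException {
        Map<String, Object> pipeline = new LinkedHashMap<>();
        for (Field field : pojo.getClass().getFields()) {
            pipeline.put(field.getName(), field.get(pojo));
        }
        return pipeline;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convert(new Address()));
    }
}
```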

ao-integrationserver

The next problem I faced was the amount of boilerplate code needed to call an IS service. If you generate a Java client for a given IS service, the resulting class contains all the setup, authentication, input handling etc. you need to call the service. However, it’s not reusable, and you’ll end up with lots of duplication if you want to call multiple services (which would be the default use case, I guess).

Therefore, I abstracted all the needed boilerplate code into a separate framework, ao-integrationserver, that provides an easy API for calling IS services on different endpoints with authentication and input/output parameters represented by simple POJOs. If you follow a certain naming convention for your Java packages and classes, you’ll be able to call an IS service by creating a single class with only a few lines of code. So, adding a new service to your Java library takes only a few minutes at most.

public class max extends Service<max.Input, max.Output>
{
    public static class Input extends ServiceInput
    {
        public String[] numList;
    }

    public static class Output extends ServiceOutput
    {
        public String maxValue;
    }
}

Our test suite

Below you can see a screenshot of our IS test suite in Eclipse. The test suite automatically runs on our Jenkins build server every time we deploy IS and we can point it to any stage (development, test, production) within a matter of seconds to make sure that all services still work as expected.

Screenshot of our test suite for Integration Server

If you would like to know more about the two frameworks or our deployment pipeline, feel free to contact me any time. If you would like to participate in the development of the frameworks, I would also love to hear from you (e.g. via a Pull Request).

How to test JMS processing in webMethods/Terracotta Universal Messaging and Integration Server with SoapUI and HermesJMS

Universal Messaging is the default messaging component used by webMethods Integration Server. Here is a short tutorial on how to test JMS processing using SoapUI and HermesJMS.

The following placeholders are used throughout this tutorial:

  • SOAPUIDIR points to the installation directory of SoapUI, e.g. C:\Program Files\SmartBear\SoapUI-5.1.3.
  • NIRVANADIR points to the installation directory of Nirvana (Universal Messaging), e.g. C:\SoftwareAG\nirvana.

Setup Universal Messaging

  • First of all you need to create the needed artifacts in your Universal Messaging realm. Start with the JNDI Provider URL and click Apply.
  • Then add a Connection Factory, Topic Connection Factory, and Topic: Create artifacts in Universal Messaging

Setup Integration Server

  • Create a JNDI Provider Alias for Universal Messaging under Settings > Messaging > JNDI Settings: Create JNDI Alias in Integration Server
  • You can now test the alias and should see the artifacts you created in Universal Messaging: Test JNDI Alias in Integration Server
  • Create a JMS Connection Alias for the JNDI Alias under Settings > Messaging > JMS Settings. Use the corresponding values from Universal Messaging for JNDI Provider Alias Name and Connection Factory Lookup Name: Create JMS Connection Alias in Integration Server
  • Enable the Connection Alias: Test JMS Connection Alias in Integration Server

Setup SoapUI/HermesJMS

  • Copy the following JARs to SOAPUIDIR\hermesJMS\lib: NIRVANADIR\lib\jndi.jar, NIRVANADIR\lib\nClient.jar, NIRVANADIR\lib\nJ2EE.jar, NIRVANADIR\lib\nJMS.jar.
  • Create a new session named IS and add all above JARs to a new classpath group named IS: Add provider JARs for Universal Messaging to HermesJMS
    Click Apply and restart HermesJMS.
  • You should now be able to select IS under Loader and hermes.JNDIConnectionFactory under Class: Configure session in HermesJMS
    Add the properties host, port, initialContextFactory, providerURL, and binding under Connection Factory and configure them according to your environment. You can find the needed values in the JNDI settings of Universal Messaging: JNDI properties in HermesJMS
  • You should now be able to discover queues and topics for the session: Discover queues and topics with HermesJMS
  • To test the subscription to a topic, you can now browse the topic: Browse topic with HermesJMS
  • If you send a test message with Software AG Designer, you should see the message in HermesJMS: Send a JMS test message with Integration Server

    Browse the JMS messages in HermesJMS

Possible errors

  • hermes.HermesException: The binding property to locate the ConnectionFactory in the Context is not set
    Add the property binding under Connection Factory in the session preferences (right-click on the session and Edit).


Unit-testing Flow Services in webMethods’ Integration Server with JUnit

Today, I built a small test bed for unit-testing Flow Services in webMethods’ Integration Server. If you develop Java Services for IS and follow the best practice of “no business logic in the service layer”, you can unit-test your logic in isolation in a plain old Java environment and deploy the library to IS, which then only acts as the service bus and simply forwards service calls to the library. However, if you create Flow Services directly in IS, which can get complicated pretty fast, I found no built-in tool for testing their logic. Here’s an example of a small Flow Service:

webMethods Integration Server Flow Service

So I went ahead and created a local Java project in which I could unit-test the remote services. I know network access in unit tests is considered a “boundary” that should not be crossed, because tests run slower and may fail due to network problems. However, I would rather have a suite of (relatively) slow tests for my important services than no tests at all and see programs fail due to regression. And debugging services in IS is no fun at all!

Calling the IS services directly from Java is surprisingly easy. Software AG Designer generates all the needed source code for you. Right-click on the service and choose Generate Code. Then follow the steps in the wizard:

Calling IS Service Wizard 1

Calling IS Service Wizard 2

Calling IS Service Wizard 3

The generated code is executable directly, as long as you add references to IS’ client.jar and enttoolkit.jar to your Java project.

Testing IS Services References

The service call itself is implemented in this simple method:

public static IData invoke(Context context, IData inputDocument)
    throws IOException, ServiceException
{
    IData out = context.invoke("PACKAGE", "NAME", inputDocument);
    IData outputDocument = out;
    return outputDocument;
}

However, as you can see above, the service accepts and returns IData objects, whose construction is not that easy and should not be scattered throughout your test code. Even providing a simple value requires at least the following code. And the code for getting the value out of IData to be able to call an assertion on it looks similar.

IData out = IDataFactory.create();
IDataCursor idc = out.getCursor();
idc.insertAfter("errorFlag", "true");
idc.destroy();

So, to make life easier, I spent some time and developed a small environment in which I can work with plain old Java objects for service input and output in my tests, and assert against the public fields of these objects, as this example of a simple date conversion service demonstrates:

protected ConvertDateToN8 sut;
protected ConvertDateToN8.Input input;

@Test
public void shouldConvertValidDate() throws Exception
{
    input.Date = "2008-10-15";
    ConvertDateToN8.Output output = sut.call(input);
    assertThat(output.DateN8, is("20081015"));
}
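For reference, the conversion this service performs (an ISO date to the 8-digit N8 format) amounts to a one-liner in plain Java. This is only a sketch of the expected behaviour; the actual implementation is the Flow service on the server:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateToN8Sketch {
    // Converts an ISO date (yyyy-MM-dd) to the 8-digit N8 form (yyyyMMdd).
    static String toN8(String isoDate) {
        return LocalDate.parse(isoDate).format(DateTimeFormatter.BASIC_ISO_DATE);
    }

    public static void main(String[] args) {
        System.out.println(toN8("2008-10-15")); // prints 20081015
    }
}
```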

The service class looks like this:

public class ConvertDateToN8 extends ServiceCall<ConvertDateToN8.Input, ConvertDateToN8.Output>
{
    public class Input extends ServiceInput
    {
        public String Date;
    }

    public class Output extends ServiceOutput
    {
        public String DateN8;
    }

    public ConvertDateToN8(String host)
    {
        super(host, "Common", "ConvertDateToN8");
    }

    @Override
    public Input createInput()
    {
        return new Input();
    }

    @Override
    protected Output createOutput()
    {
        return new Output();
    }
}

The only thing that needs to be added to the project to test another service is a class like the one above. I can simply define the input and output of the service in nested classes and everything else is magically built with generics and reflection. Here is an example of converting the generic Input object to IData in class ServiceCall:

protected IData createPipelineFromInput(ServiceInput input)
{
    IData pipeline = IDataFactory.create();
    IDataCursor idc = pipeline.getCursor();

    Class<?> c = input.getClass();
    for (Field publicField : c.getFields())
    {
        try
        {
            String fieldName = FieldNameConverter.convertToPipelineName(publicField.getName());
            Object fieldValue = FieldConverter.convertToPipeline(publicField.getType().getName(), publicField.get(input));
            IDataUtil.put(idc, fieldName, fieldValue);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
    idc.destroy();

    return pipeline;
}
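The opposite direction – copying values from the result pipeline back into the public fields of the Output object – can be sketched the same way. A plain Map stands in for IData here, and the FieldNameConverter/FieldConverter steps are omitted; the class and example field are made up for illustration:

```java
import java.lang.reflect.Field;
import java.util.Map;

public class OutputFiller {
    // Example output type with a public field, mirroring ServiceOutput usage.
    public static class Output {
        public String DateN8;
    }

    // Sets each public field of the output object from the pipeline entry
    // of the same name, mimicking the reverse conversion in ServiceCall.
    static void fill(Object output, Map<String, Object> pipeline)
            throws IllegalAccessException {
        for (Field field : output.getClass().getFields()) {
            if (pipeline.containsKey(field.getName())) {
                field.set(output, pipeline.get(field.getName()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Output out = new Output();
        fill(out, Map.of("DateN8", "20081015"));
        System.out.println(out.DateN8); // prints 20081015
    }
}
```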

Even more complex data structures like objects or arrays can be converted by extending the FieldConverter and PipelineConverter classes, which encapsulate the access to IData:

public class Input extends ServiceInput
{
    public String errorFlag;
    public String errorCode;
    public String errorMessage;
    public NaturalErrorInfo ERROR_INFO;
}

You can find the implementation for converting the above input to and from IData in the source code, which can be downloaded below. Feel free to take a look and contact me with questions or additions!

Download