How to use an existing access token to authenticate against the Google API in Ruby

After a few hours of trial and error, I finally got authentication against Google’s API working in Ruby, so I think it’s time for a blog post 😉

I had a simple (at least I thought it was simple) requirement: reading the people in a user’s Google+ circles with Ruby. Because I use OmniAuth in my application, the user is already authenticated and I even have their access token stored in the database. However, it took me a few hours to find out how to use this token to access Google’s API.
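For context, OmniAuth delivers the token in the auth hash of the OAuth callback; here is a minimal sketch of pulling it out (the hash shape under credentials/token is OmniAuth’s convention, but the helper name and sample values are my own, not from this post):

```ruby
# Minimal sketch: OmniAuth puts the OAuth2 access token into the auth hash
# under credentials/token. Helper name and sample values are illustrative.
def extract_access_token(auth_hash)
  auth_hash.dig('credentials', 'token')
end

# Shape of the relevant part of request.env['omniauth.auth']:
sample_auth = { 'uid' => '1234', 'credentials' => { 'token' => 'ya29.EXAMPLE' } }
access_token = extract_access_token(sample_auth)
```

In my app, this token is what ends up in the database and is later used for the API calls below.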

Reading people from a Google+ user’s circles

Reading the people from a user’s circles is quite easy. Simply use plus.people.list. You can try it with the Google APIs Explorer. However, you need to make sure your app requests the right permissions. This is the corresponding line from my Rails initializer omniauth.rb (I needed to add the scope plus.login and change the access type to offline):

provider :google_oauth2, "CLIENTID", "SECRET", scope: 'profile,email,plus.login', image_aspect_ratio: 'square', image_size: 48, access_type: 'offline', name: 'google'

The Ruby code using google-api-ruby-client looks like this:

plus = Google::Apis::PlusV1::PlusService.new
friends = plus.list_people("me", "visible").items

The items you get from Google+ look like this:

#<Google::Apis::PlusV1::Person:0x4307fd0 
    @display_name="The Name",
    @etag="\"some tag\"",
    @id="1234",
    @image=#<Google::Apis::PlusV1::Person::Image:0x43057f8 
        @url="https://lh6.googleusercontent.com/.../photo.jpg?sz=50">,
    @kind="plus#person",
    @object_type="person",
    @url="https://plus.google.com/1234">

Using an existing access token for authentication

Google’s documentation states that you need to use Signet to authenticate against the Google API. So I thought I’d give it a try and started coding. But as it turns out, Signet is a complex beast and I didn’t want to re-implement the whole OAuth authentication process, because OmniAuth already does that for me. I just wanted to use my existing token for authentication!

Long story short: I found the solution in the file http_command.rb, in the method apply_request_options():

if options.authorization.respond_to?(:apply!)
    options.authorization.apply!(req.header)
elsif options.authorization.is_a?(String)
    req.header[:authorization] = sprintf('Bearer %s', options.authorization)
end

You can simply set the attribute authorization of your PlusService to a string (instead of a Signet object) and it will be sent as the Bearer token in the Authorization header of the HTTP request to Google’s API. So, I simply had to add this line to my calling code and I was done:

plus.authorization = access_token

The final code

I still can’t believe that the final solution is so simple! 🙂

Gemfile:

gem 'google-api-client', '~> 0.9'

omniauth.rb:

provider :google_oauth2, "CLIENTID", "SECRET", scope: 'profile,email,plus.login', image_aspect_ratio: 'square', image_size: 48, access_type: 'offline', name: 'google'

friend_reader.rb:

require 'google/apis/plus_v1'

plus = Google::Apis::PlusV1::PlusService.new
plus.authorization = access_token
friends = plus.list_people("me", "visible").items

How to enable access logging (accesslog) in JBoss EAP 7

Configuring JBoss EAP 7 to write an access log (like the Apache web server does) is quite easy with the CLI:

/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add

If you need any additional configuration, take a look at this: Wildfly 10.0.0.Final Model Reference.

For example, to change the prefix of the log’s file name:

/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=prefix, value="my_access")

Alternatively, you could change the configuration in the config XML (e.g. standalone.xml):

<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    ...
    <server name="default-server">
        ...
        <host name="default-host" alias="localhost">
            ...
            <access-log prefix="my_access" />
            ...
        </host>
    </server>
    ...
</subsystem>

JBoss will now write every access to the application to the access log in the log directory (e.g. JBOSS_HOME\standalone\log):

192.168.1.1 - - [23/Jun/2016:13:29:24 +0200] "GET /MyApp/MyPath/MyFile.xhtml HTTP/1.1" 200 5738

How to seed the database with sample data for an Arquillian test

Arquillian makes it easy to test your Java EE application on a real server. However, if you deploy an application that uses a database to perform its tasks, how do you make sure a database with test entries is available on the target server?

If you set up a “real” database on the server, the tests become hard to understand, because the data that’s used in assertions is not visible directly in the test. You have to know the content of the database to make sense of the tests. In addition, you’ll have to maintain the database and keep the data in a consistent state.

Instead, I would like my tests to create the test data themselves, so that I can see everything I need to know to understand the tests in one place and I don’t have to maintain a database system.

Using EntityManager

My first attempt was to inject an EntityManager into my Arquillian test and set up the test data like this:

@RunWith(Arquillian.class)
public class IntegrationTest
{
    @PersistenceContext
    private EntityManager em;

    @Resource
    private UserTransaction userTransaction;

    private User user;

    @Deployment
    public static WebArchive createDeployment()
    {
        return ShrinkWrap
                .create(WebArchive.class)
                .addClass(User.class)
                ...
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
                .addAsResource("arquillian/persistence.xml", "META-INF/persistence.xml");
    }

    @Before
    public void setup() throws Exception
    {
        user = new User("myuser", "mypassword");
        userTransaction.begin();
        em.persist(user);
        userTransaction.commit();
    }

    @Test
    public void existingUserShouldBeFound()
    {
        assertThat(
                em.find(User.class, "myuser"),
                is(user));
    }
}

The file arquillian/persistence.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        </properties>
    </persistence-unit>
</persistence>

It simply uses the default datasource of the application server (JBoss EAP in my case), so I don’t even have to set up a new datasource for my tests. Hibernate drops and recreates the tables during each deployment, so the tests start with a fresh database.

This test works great as long as you run it inside the container. The EntityManager only gets injected into the test if you deploy it to the application server. However, if you want to test your application from the outside, em will be null.

@Test
@RunAsClient
public void existingUserShouldBeFound()
{
    // some assertions against the API
}

If you add @RunAsClient to the test method or set the deployment to @Deployment(testable = false), the test is run outside the container (e.g. if you want to test a REST API). Dependency injection doesn’t work in this case and the test fails:

java.lang.NullPointerException
    at it.macke.arquillian.api.IntegrationTest.setup(IntegrationTest.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    ...

Using an initialization observer

If you use Java EE 7 (which I do), there’s an easy way to run code at the start of your application: observers. If you add an observer to your project that gets called as soon as the application is initialized, you can execute all the needed setup code before the first test is run.

Here’s an example of my observer, which creates the test data in the database after the deployment to the remote server:

@ApplicationScoped
public class TestDataCreator
{
    @PersistenceContext
    private EntityManager em;

    @Transactional
    void createTestUser(
            @Observes @Initialized(ApplicationScoped.class) final Object event)
    {
        User user = new User("myuser", "mypassword");
        em.persist(user);
    }
}

Of course, you need to include the observer in your deployment:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addClass(TestDataCreator.class)
        ...
}

If you add some logging to TestDataCreator, you can see that it creates the test data after the deployment to the remote server:

19:28:27,589 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000227: Running hbm2ddl schema export
19:28:27,592 INFO  [org.hibernate.tool.hbm2ddl.SchemaExport] (ServerService Thread Pool -- 140) HHH000230: Schema export complete
...
19:28:27,916 INFO  [it.macke.arquillian.setup.TestDataCreator] (ServerService Thread Pool -- 143) Adding test user: User{username=myuser, password=mypassword}
...
19:28:28,051 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 143) WFLYUT0021: Registered web context: /2e36a3c3-be16-4e00-86c3-598ce822d286
19:28:28,101 INFO  [org.jboss.as.server] (management-handler-thread - 29) WFLYSRV0010: Deployed "2e36a3c3-be16-4e00-86c3-598ce822d286.war" (runtime-name : "2e36a3c3-be16-4e00-86c3-598ce822d286.war")

Now, my client test runs successfully, because the needed test data gets created as soon as the application is deployed.

How to test a REST API with Arquillian

Testing a REST API on a real application server with Arquillian is easy – if you know what you need to do 😉

I have this simple REST service that authenticates a user:

@POST
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response authenticateUser(final UserData userData)
{
    ...
    return Response
        .status(401)
        .entity(false)
        .build();
}

Let’s write a test for this REST API that uses a real application server and calls the interface from the outside.

Arquillian dependencies

The REST extensions for Arquillian make it fairly easy to test a deployed web application. Let’s start with their dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon (or any other specific container, e.g. JBoss, Wildfly etc.)
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6',
// REST extensions
'org.jboss.arquillian.extension:arquillian-rest-client-impl-jersey:1.0.0.Alpha4'

Arquillian test

I can now write a test against the REST interface on the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addPackages(true, Filters.exclude(".*Test.*"),
            SessionResource.class.getPackage(),
            ...)
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
}

@Test
@RunAsClient
public void authenticateUser(
    @ArquillianResteasyResource final WebTarget webTarget)
{
    final Response response = webTarget
        .path("/sessions")
        .request(MediaType.APPLICATION_JSON)
        .post(Entity.json(new UserData(
            "myuser",
            "mypassword")));
    assertEquals(true, response.readEntity(Boolean.class));
}

Note the @RunAsClient. This tells Arquillian to run the test as a client against the remote server and not within the remote server. The test class isn’t even deployed to the application server in this case (see Filters.exclude(".*Test.*")). That’s exactly what I needed, because I wanted to test the REST API from the outside to make sure it’s available to its clients. Alternatively, you could set the whole deployment to @Deployment(testable = false) instead of configuring each test individually.

The REST extensions for Arquillian now call the test method with a WebTarget as a parameter. It points directly to the URL of the deployed web application, e.g. http://1.2.3.4:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. So you don’t have to manually create the target URL every time.

JBoss server configuration

It took me a while to find out how the target server – JBoss EAP 7 in my case – needs to be configured to make this test pass, because the initial error message after my first attempt to run the test wasn’t very helpful at all:

javax.ws.rs.ProcessingException: java.net.ConnectException: Connection refused: connect
    at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:287)
    at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:255)
    ...
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.connect0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)

I could see in the server’s logs that the deployment of the WAR file into JBoss was successful, so it had to be the REST client that couldn’t connect to the API. A simple System.out.println(webTarget.getUri()); showed the problem: http://0.0.0.0:8080/044f5571-3b1a-4976-9baa-a64b56f2eec1/rest. Arquillian uses the IP configuration of the target server, and I had configured JBoss to listen on all of its interfaces (hence 0.0.0.0).

To fix this problem, I made JBoss listen on the exact public IP address of the server by adding this to \standalone\configuration\standalone.xml. Be sure to use the real IP address and not 0.0.0.0 or <any-address />.

<interfaces>
    <interface name="management">
        <inet-address value="1.2.3.4"/>
    </interface>
    <interface name="public">
        <inet-address value="1.2.3.4"/>
    </interface>
</interfaces>

How to test JBoss EAP 7 with Arquillian

It took me quite some time to get my Arquillian tests running against a remote JBoss EAP 7.0.0.Beta1 application server, so I thought I’d share my configuration.

At the time of this writing, there was no Arquillian container adapter for JBoss EAP 7 available, so I had to use the Arquillian Chameleon Container. However, it turns out that Chameleon is even easier to configure than a specific container, because it more or less configures itself. Chameleon automatically downloads the needed container for you if you tell it which application server you want to use.

Java project

Arquillian dependencies

Let’s start with the dependencies in build.gradle:

// main BOM for Arquillian
'org.jboss.arquillian:arquillian-bom:1.1.11.Final',
// JUnit container
'org.jboss.arquillian.junit:arquillian-junit-container:1.1.11.Final',
// Chameleon
'org.arquillian.container:arquillian-container-chameleon:1.0.0.Alpha6'

arquillian.xml

My arquillian.xml looks like this:

<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <container qualifier="chameleon" default="true">
        <configuration>
            <property name="chameleonTarget">jboss eap:7.0.0.Beta:remote</property>
            <property name="managementAddress">THE_SERVERNAME_OR_IP</property>
            <property name="managementPort">THE_PORT_IF_NOT_9990</property>
            <property name="username">THE_USER</property>
            <property name="password">THE_PASSWORD</property>
        </configuration>
    </container>
</arquillian>

The single line containing chameleonTarget defines the target server to be a remote JBoss EAP 7. That means JBoss has to be running on the target machine at the specified address or hostname and port. I chose this style of testing over the managed alternative, because it reduces the overhead of starting and stopping the server during tests. In addition, I think it’s a more realistic test, as the application gets deployed to a “real” server.

Arquillian test

I can now write a test against the server like this:

@Deployment
public static WebArchive createDeployment()
{
    return ShrinkWrap
        .create(WebArchive.class)
        .addPackage(MyClass.class.getPackage())
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
        .addAsResource("arquillian/persistence.xml", "META-INF/persistence.xml");
}

@Test
public void authenticateUser()
{
    // asserts
}

persistence.xml

My persistence.xml for the Arquillian test simply uses JBoss’s default datasource. Hibernate creates and drops the tables before and after each test run.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
        </properties>
    </persistence-unit>
</persistence>

Server configuration

To successfully run the test, quite a bit of configuration was needed on the server side. Here’s what I had to do.

Add a management user

If you want to deploy to a remote JBoss server, you need a management user (if you test against a JBoss server on your local machine, this is not necessary). The credentials have to be configured in arquillian.xml (see above).

C:\jboss-eap-7.0.0.Beta\bin>add-user.bat

What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): a

Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Username : remote
Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.
 - The password should be different from the username
 - The password should not be one of the following restricted values {root, admin, administrator}
 - The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
Password :
WFLYDM0098: The password should be different from the username
Are you sure you want to use the password entered yes/no? yes
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
About to add user 'remote' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'remote' to file 'C:\jboss-eap-7.0.0.Beta\standalone\configuration\mgmt-users.properties'
Added user 'remote' to file 'C:\jboss-eap-7.0.0.Beta\domain\configuration\mgmt-users.properties'
Added user 'remote' with groups  to file 'C:\jboss-eap-7.0.0.Beta\standalone\configuration\mgmt-groups.properties'
Added user 'remote' with groups  to file 'C:\jboss-eap-7.0.0.Beta\domain\configuration\mgmt-groups.properties'
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? no

Bind to the public IP address

To make JBoss listen on the public IP address of the server, add this to \standalone\configuration\standalone.xml. The default configuration only allows connections on 127.0.0.1, which of course won’t be available from the outside.

<interfaces>
    <interface name="management">
        <any-address />
    </interface>
    <interface name="public">
        <any-address />
    </interface>
</interfaces>

Local repository

If you use a local repository like Artifactory (perhaps because, like me, you are behind a proxy server), you need to configure Maven to use this repository. Arquillian Chameleon downloads the artifacts it needs directly during the test (not during the build, where your individual repository would be configured in the build file) and uses whatever repository is configured as central. In my case, Chameleon could not connect to repo1.maven.org to download its dependencies, so I had to configure the local repository in the central Maven configuration file ~\.m2\settings.xml like so:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>artifactory</id>
            <repositories>
                <repository>
                    <id>central</id>
                    <url>http://artifactory.intranet/java-repos</url>
                    <snapshots>
                        <enabled>false</enabled>
                    </snapshots>
                </repository>
            </repositories>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>artifactory</activeProfile>
    </activeProfiles>
</settings>

Using Gradle wrapper behind a proxy server with self-signed SSL certificates

Today, I wanted to add a Gradle Wrapper to my Java project but ran into a few issues. I am behind a proxy server that changes the SSL certificates so it can scan traffic for viruses.

My first attempt to start gradlew build resulted in:

    Exception in thread "main" java.net.UnknownHostException: services.gradle.org
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
            at java.net.Socket.connect(Socket.java:589)
            at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
            at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
            ...

Gradle didn’t use the proxy server and tried to connect to the internet directly. This was solved by setting the proxy server in %GRADLE_USER_HOME%\gradle.properties (see Gradlew behind a proxy):

    systemProp.http.proxyHost=192.168.1.1
    systemProp.http.proxyPort=80
    systemProp.http.proxyUser=userid
    systemProp.http.proxyPassword=password
    systemProp.https.proxyHost=192.168.1.1
    systemProp.https.proxyPort=80
    systemProp.https.proxyUser=userid
    systemProp.https.proxyPassword=password

The next try led to:

    Downloading https://services.gradle.org/distributions/gradle-2.11-bin.zip

    Exception in thread "main" javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
            at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
            at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
            at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
            at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
            at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
            ....

The reason for the SSLHandshakeException was the proxy’s self-signed certificates, which could not be validated. I had to add them to the Java keystore (see Java: Ignore/Trust an invalid SSL cert for https communication and Cacerts default password? -> the default password for the Java keystore is changeit):

    "%JAVA_HOME%\bin\keytool" -import -trustcacerts -alias MY_ALIAS -file MY_CERT.crt -keystore "%JAVA_HOME%\jre\lib\security\cacerts"

Now, Gradle was able to connect to gradle.org to download the distribution. However, the proxy server would not let the ZIP file through:

    Exception in thread "main" java.io.IOException: Server returned HTTP response code: 403 for URL: https://downloads.gradle.org/distributions/gradle-2.11-bin.zip
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1840)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
            ...

So I configured Gradle to “download” the ZIP file from the local hard drive in %GRADLE_USER_HOME%\gradle.properties (see How to use gradle zip in local system without downloading when using gradle-wrapper):

    distributionBase=GRADLE_USER_HOME
    distributionPath=wrapper/dists
    zipStoreBase=GRADLE_USER_HOME
    zipStorePath=wrapper/dists
    distributionUrl=gradle-2.11-bin.zip

I manually downloaded the distribution file and put it into %GRADLE_USER_HOME%\wrapper\dists\gradle-2.11-bin\[SOME_HASH]\.

And finally the build was successful! 😀

    D:\MY_PROJECT>gradlew build
    Unzipping D:\GradleUserHome\wrapper\dists\gradle-2.11-bin\452syho4l32rlk2s8ivdjogs8\gradle-2.11-bin.zip to D:\GradleUserHome\wrapper\dists\gradle-2.11-bin\452syho4l32rlk2s8ivdjogs8
    Starting a new Gradle Daemon for this build (subsequent builds will be faster).
    Parallel execution with configuration on demand is an incubating feature.
    :compileJava UP-TO-DATE
    ...

How to export all mapped environments from Natural Studio (SPoD)

A colleague of mine wanted to export all the mapped environments from Natural Studio (SPoD). As we have quite a lot of different environments due to our complex staging concept, manually re-creating this list would have been cumbersome.

Map an environment in Natural Studio

Long story short: we didn’t find a way to export the environments from within Natural Studio. However, after a quick search I found this file: %PROGRAMDATA%\Software AG\Natural\6.3\Prof\%USERNAME%.PRU (which translates to C:\ProgramData\Software AG\Natural\6.3\Prof\MACKE.PRU in my case). It contains all the mapped environments in a text format:

Mapping2 = MAP THEHOST 2720 ALIAS=My environment; MACKE * * fuser=(22,123) ;CONNECTED=FALSE

Here’s a quick regular expression to extract the needed parts from the string:

MAP (.+) ([0-9]+) ALIAS=([^;]+); ([^ ]+) \* \* ([^;]*);.*
  • Group 1: Host name (THEHOST)
  • Group 2: Server port (2720)
  • Group 3: Environment name (My environment)
  • Group 4: User ID (MACKE)
  • Group 5: Session parameter (fuser=(22,123))
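Applied in code, the expression yields exactly these five groups; here is a quick Ruby sketch of my own (the sample line is the one from the .PRU file above):

```ruby
# Sketch: apply the regular expression above to the sample line from the
# .PRU file and pull out the five groups.
line = 'Mapping2 = MAP THEHOST 2720 ALIAS=My environment; MACKE * * fuser=(22,123) ;CONNECTED=FALSE'
pattern = /MAP (.+) ([0-9]+) ALIAS=([^;]+); ([^ ]+) \* \* ([^;]*);.*/
host, port, name, user, params = line.match(pattern).captures
```

Note that group 5 keeps the blank before the semicolon, so you may want to strip it.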

JsonParseException when calling a REST service with curl on Windows

When I called a REST service (provided by webMethods Integration Server) with curl on my Windows machine, I got the following error:

org.codehaus.jackson.JsonParseException:Unexpected character (''' (code 39)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')\n at [Source: com.wm.net.HttpInputStream@74356f93; line: 1, column: 2]

The solution was quite simple: use double quotes instead of single quotes when providing the JSON payload. The Windows command prompt doesn’t treat single quotes as string delimiters, so they were sent to the server as part of the payload, which is not valid JSON.

The curl call that led to the above message looked like this:

curl^
      -X POST^
      -H "Accept: application/json"^
      -H "Content-Type: application/json"^
      -d '{"Username":"Stefan","Password":"secret"}'^
      http://server/service

After changing the fifth line to -d "{\"Username\":\"Stefan\",\"Password\":\"secret\"}"^ everything worked as expected.
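The root cause is easier to see if you let a JSON library build the payload: valid JSON always uses double-quoted strings. A small Ruby sketch of my own (not part of the original curl call):

```ruby
require 'json'

# JSON.generate always emits double-quoted strings, which is what the
# server-side parser expects; single quotes are simply not valid JSON.
payload = JSON.generate('Username' => 'Stefan', 'Password' => 'secret')
```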

How to import SSL certificates into webMethods Integration Server

In this article I described how you can generate a self-signed SSL certificate to enable HTTPS in webMethods Integration Server: How to create a self-signed SSL certificate for webMethods Integration Server with OpenSSL. Now it’s time to import a real certificate.

If you have received the signed certificate from your Certificate Authority, you can follow these steps to import it into Integration Server. I’m using OpenSSL on a Linux machine and Java’s keytool on my Windows workstation for the command line work.

Prepare the certificate

  • The private key has to be in PEM format and needs to be BASE64 encoded. At least in my case OpenSSL wasn’t able to handle it otherwise.
  • First of all, you need to protect your private key with a password, if you haven’t already done so.

    openssl rsa -des3 -in integrationserver.key -out integrationserver.key
    
  • If the certificate is in DER format (in my case it was, and the file had the extension .cer), it has to be converted to PEM:

    openssl x509 -in integrationserver.cer -inform DER -out integrationserver.crt -outform PEM
    
  • Now the keystore for Integration Server can be created:

    openssl pkcs12 -export -des3 -in integrationserver.crt -inkey integrationserver.key -out integrationserver.p12
    
  • Now we need to create a Truststore containing the issuing certificates of our certificate. You need to download the required certificates for the whole certificate chain and add them to a Truststore:

    keytool -import -alias rootCA -keystore integrationserver.jks -file rootCA.crt
    

    You need to repeat this command for each certificate of the chain with a unique alias.

Import the certificate into Integration Server

  • Create a Truststore Alias under Security -> Keystore -> Create Truststore Alias.
    Create a Truststore Alias in webMethods Integration Server
  • Create a Keystore Alias under Security -> Keystore -> Create Keystore Alias.
    Create a Keystore Alias in webMethods Integration Server
  • Create an HTTPS Port under Security -> Ports -> Add Port.
    Create an HTTPS Port in webMethods Integration Server
  • Enable access through the new port.
    Enable access through an HTTPS Port in webMethods Integration Server
  • Test your new HTTPS connection in a browser:
    https://YOUR-SERVERNAME:5443/
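If you’d rather script the check than use a browser, a minimal Net::HTTP sketch might look like this (the hostname is the same placeholder as above; this is my own illustration, not from the original post):

```ruby
require 'net/http'
require 'openssl'

# Illustrative sketch: open an HTTPS connection to the new port. With a real,
# CA-signed certificate the default peer verification should succeed.
http = Net::HTTP.new('YOUR-SERVERNAME', 5443)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
# http.start { |h| puts h.get('/').code }  # run against a live server
```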

How to create a self-signed SSL certificate for webMethods Integration Server with OpenSSL

Here’s a short step-by-step guide on how to create and install a self-signed SSL certificate for testing purposes in webMethods Integration Server. You can test secure HTTPS connections from clients to Integration Server with this certificate.

Create a certificate

You can easily create the certificate using OpenSSL on a Linux system.

  • Create a private key.

    openssl genrsa -des3 -out integrationserver.key 1024
    
    Generating RSA private key, 1024 bit long modulus
    ........................++++++
    .++++++
    e is 65537 (0x10001)
    Enter pass phrase for integrationserver.key:
    Verifying - Enter pass phrase for integrationserver.key:
    
  • Create a certificate signing request (CSR).

    openssl req -new -key integrationserver.key -out integrationserver.csr
    
    Enter pass phrase for integrationserver.key:
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [AU]:DE
    State or Province Name (full name) [Some-State]:Lower-Saxony
    Locality Name (eg, city) []:Vechta
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:Macke IT
    Organizational Unit Name (eg, section) []:
    Common Name (eg, YOUR name) []:localhost
    Email Address []:
    
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:IntegrationServer
    An optional company name []:
    
  • Sign the CSR yourself and create a certificate.

    openssl x509 -req -days 365 -in integrationserver.csr -signkey integrationserver.key -out integrationserver.crt
    
    Signature ok
    subject=/C=DE/ST=Lower-Saxony/L=Vechta/O=Macke IT/CN=localhost
    Getting Private key
    Enter pass phrase for integrationserver.key:
    
  • Convert the certificate to DER (which Integration Server needs).

    openssl x509 -in integrationserver.crt -inform PEM -out integrationserver_der.crt -outform DER
    
  • Create a keystore containing the private key and the certificate in format PKCS12 (which Integration Server needs).

    openssl pkcs12 -export -des3 -in integrationserver.crt -inkey integrationserver.key -out integrationserver.pkcs12
    
    Enter pass phrase for integrationserver.key:
    Enter Export Password:
    Verifying - Enter Export Password:
    
  • Take a look at all the generated files and copy them over to a directory where IS can access them.

    ls -la
    
    -rw-r--r--  1 root root  818 10. Jan 18:32 integrationserver.crt
    -rw-r--r--  1 root root  680 10. Jan 18:32 integrationserver.csr
    -rw-r--r--  1 root root  563 10. Jan 18:34 integrationserver_der.crt
    -rw-r--r--  1 root root  963 10. Jan 18:29 integrationserver.key
    -rw-r--r--  1 root root 1581 10. Jan 18:34 integrationserver.pkcs12
    

Install the certificate in Integration Server

  • Install the keystore via Security -> Keystore -> Create Keystore Alias on IS’s web frontend.
    Add a Keystore Alias in webMethods Integration Server
    Add a Keystore Alias in webMethods Integration Server
    The new Keystore should now be listed.
    List the Keystore Aliases in webMethods Integration Server
  • Install the certificate via Security -> Certificates -> Edit Certificates Settings.
    Add a certificate in webMethods Integration Server

Add an HTTPS Port in Integration Server

  • Security -> Ports -> Add Port
    Add an HTTPS Port in webMethods Integration Server
    Add an HTTPS Port in webMethods Integration Server
  • You may need to configure the Access Mode of the new port, so that folders and services will be available via HTTPS. Simply click on the link in column Access Mode and configure the settings (Security -> Ports -> Edit Access Mode).
  • Test the HTTPS connection by navigating to https://localhost:5443. The certificate error is OK, because we self-signed our certificate. Add the certificate to the list of trusted certificates and move on. If you use a “real” certificate later, the error will go away.
    Certificate error in webMethods Integration Server
