SOAP is dead – Lessons Learned from SOA-fying a Monolith

I’ll continue my series of blog posts regarding the lessons we learned while SOA-fying our monolithic Adabas/Natural application with a more technical lesson:

SOAP is dead.

SOAP is dead

This may be a harsh statement, considering that we started out with Webservices based on SOAP and that our whole infrastructure is still based on it. However, while we were still searching for the right solution to the communication problem, two major programming languages stopped supporting SOAP out of the box: Groovy and Ruby. And we used both of them.

If you take a closer look at the current trends in architecture and distributed systems, you’ll find plenty of conference talks and frameworks for providing and consuming REST services. SOAP, however, already seems to be legacy technology.

To be honest, we don’t have a single REST service in production yet. But after working with SOAP Webservices for quite some time now, I can definitely see the advantages of a more loosely coupled approach like REST. The interface design for SOAP services can take quite a lot of time, because you have to define individual operations and data types. And changing the interface – e.g. adding a parameter – forces a rebuild of all systems using it. With REST you only need simple CRUD operations and can use JSON as a very flexible data format that can be extended later on.
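As a purely illustrative sketch: a consumer that only reads the fields it knows about keeps working when the JSON representation grows a new field later on:

Version 1 of the resource:
{ "street": "Main St. 1", "city": "Springfield" }

Version 2 adds a field; existing consumers simply ignore it:
{ "street": "Main St. 1", "city": "Springfield", "country": "USA" }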

I’m glad that Integration Server provides an easy means to publish REST services. You can even put them on top of existing services and simply provide a more flexible interface to the consumers. This is definitely a solution I want to try out in the near future.

So, if you start with an SOA today, take a closer look at REST and don’t blindly go with the default implementation, SOAP, as it may no longer be supported in the near future.

Test everything! – Lessons Learned from SOA-fying a Monolith

Another lesson we learned while making our legacy application ready for a service-oriented architecture is this:

Test everything.

Test everything

When I started out writing Flow services in webMethods Integration Server (IS), there was no (nice) way of testing them automatically. Although consultants told us multiple times that there was a test framework for IS, we never got the actual code. So I had to develop a test framework myself, simply to be sure that everything still worked as before after a deployment of IS.

The results of my development effort are two small Java frameworks: ao-idata-converter and ao-integrationserver, both described below.

I wanted to be able to test IS services (Flow, Java, etc.) with the established tool we already used in our projects: JUnit. However, Integration Server’s Java interface relies on IData, the internal representation of IS’s pipeline. Working with it can get pretty annoying, because it’s nothing more than a big hash table with its own API. You have to deconstruct your Java objects into the IData structure every time you call a service and compose them back together when you get the results, only to be able to call an assertion on them.
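Just to illustrate the ceremony involved, here is a minimal sketch of the manual approach using the standard com.wm.data API (the pipeline field names and the addressObject/output variables are made up for this example):

// Deconstruct a POJO into the pipeline structure by hand
// (imports: com.wm.data.IData, IDataCursor, IDataFactory, IDataUtil;
//  static import: org.junit.Assert.assertEquals)
IData input = IDataFactory.create();
IDataCursor inputCursor = input.getCursor();
IDataUtil.put(inputCursor, "street", addressObject.getStreet());
IDataUtil.put(inputCursor, "zipCode", addressObject.getZipCode());
inputCursor.destroy();

// ... invoke the service with this pipeline ...

// Take the result apart again, only to be able to assert on it
IDataCursor outputCursor = output.getCursor();
String street = IDataUtil.getString(outputCursor, "street");
outputCursor.destroy();
assertEquals("Main St. 1", street);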

ao-idata-converter

My solution for this problem is a project called ao-idata-converter. It takes any Plain Old Java Object (POJO) and converts it to IData with a bit of reflection magic. You can even use beans with getters and setters and map attribute names to different fields in the pipeline. So, with ao-idata-converter you can get rid of all the manual conversion from and to IData in your code.

IData convertedObject =
    new ObjectConverter().convertToIData("address", addressObject);

ao-integrationserver

The next problem I faced was the amount of boilerplate code needed to call an IS service. If you generate a Java client for a given IS service, the resulting class contains all the setup, authentication, input handling etc. you need to call the service. However, it’s not reusable, and you’ll end up with lots of duplication if you want to call multiple services (which I guess is the default use case).

Therefore, I abstracted all the needed boilerplate code into a separate framework, ao-integrationserver, which provides an easy API for calling IS services on different endpoints, with authentication and with input/output parameters represented by simple POJOs. If you follow a certain naming convention for your Java packages and classes, you’ll be able to call an IS service by creating a single class with only a few lines of code. So, adding a new service to your Java library takes only a few minutes at most.

public class max extends Service<max.Input, max.Output>
{
    public static class Input extends ServiceInput
    {
        public String[] numList;
    }

    public static class Output extends ServiceOutput
    {
        public String maxValue;
    }
}
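Calling the service from a JUnit test might then look roughly like this (a hypothetical sketch – the actual invocation API of ao-integrationserver may differ):

// Hypothetical usage; "call" is an assumed method name
max.Input input = new max.Input();
input.numList = new String[] { "4", "15", "8" };

max.Output output = new max().call(input);
assertEquals("15", output.maxValue);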

Our test suite

Below you can see a screenshot of our IS test suite in Eclipse. The test suite automatically runs on our Jenkins build server every time we deploy IS and we can point it to any stage (development, test, production) within a matter of seconds to make sure that all services still work as expected.

Screenshot of our test suite for Integration Server

If you would like to know more about the two frameworks or our deployment pipeline, feel free to contact me any time. If you would like to participate in the development of the frameworks, I would also love to hear from you (e.g. via a Pull Request).

Make use of Diversity – Lessons Learned from SOA-fying a Monolith

One of the lessons we learned while SOA-fying our legacy application (an Adabas/Natural monolith that is almost 20 years old) is:

Make use of diversity.

Make use of diversity!

When we designed the central interface for our service modules, we made sure that developers with different backgrounds worked together in the design team. For example, we had older and younger developers as well as Natural and Java developers work together. They all worked towards the same goal: creating an interface that all of our systems could use to communicate with each other.

We put together ideas from the “old” world – procedural data processing – and ideas from the “new” world – object and service orientation – and merged them into a flexible interface that combines the best concepts of both worlds. The more experienced developers brought their domain knowledge to the table and the younger developers added their new ways of thinking about the technical problems.

What we ended up with was an interface that could easily be provided and consumed from all the platforms we use. Because we put the interface to the test right away and made sure that problems would become visible quite quickly, we only needed a few iterations before the final design was ready.

And every developer who was involved in the process already knew the interface and could actively promote its usage to the rest of the team. And each of those developers could use the language they were already familiar with and didn’t have to be taught new concepts first – which isn’t that easy for either Natural or Java developers, by the way.

So, my advice to you, if you plan to design a central interface, data model, or business process, is: Make sure to integrate as many different views and backgrounds as possible. The coordination might take a bit longer (at first), but you’ll probably end up with a much better solution.

If you make use of the individual strengths of your developers, the end result will be a well-rounded solution.

SOA-fying a Monolith – Innovation World 2015

My talk for Software AG’s Innovation World 2015 in Las Vegas got accepted and is already visible on the agenda: Lessons Learned from SOA-fying a Monolithic Legacy Application.

Logo Innovation World 2015

How do you modernize a monolithic legacy application to meet the requirements of today’s service-oriented world? In this talk, Stefan Macke shares his insights from SOA-fying a 15-year-old Adabas & Natural insurance application with the help of webMethods Integration Server, a bunch of unit tests and a domain-specific language for creating a canonical data model. He presents technical, architectural as well as organizational lessons learned from the modernization project.

How to test JMS processing in webMethods/Terracotta Universal Messaging and Integration Server with SoapUI and HermesJMS

Universal Messaging is the default messaging component used by webMethods Integration Server. Here is a short tutorial on how to test JMS processing with SoapUI and HermesJMS. Two placeholders are used throughout:

  • SOAPUIDIR points to the installation directory of SoapUI, e.g. C:\Program Files\SmartBear\SoapUI-5.1.3.
  • NIRVANADIR points to the installation directory of Nirvana, e.g. C:\SoftwareAG\nirvana.

Setup Universal Messaging

  • First of all you need to create the needed artifacts in your Universal Messaging realm. Start with the JNDI Provider URL and click Apply.
  • Then add a Connection Factory, Topic Connection Factory, and Topic: Create artifacts in Universal Messaging

Setup Integration Server

  • Create a JNDI Provider Alias for Universal Messaging under Settings > Messaging > JNDI Settings: Create JNDI Alias in Integration Server
  • You can now test the alias and should see the artifacts you created in Universal Messaging: Test JNDI Alias in Integration Server
  • Create a JMS Connection Alias for the JNDI Alias under Settings > Messaging > JMS Settings. Use the corresponding values from Universal Messaging for JNDI Provider Alias Name and Connection Factory Lookup Name: Create JMS Connection Alias in Integration Server
  • Enable the Connection Alias: Test JMS Connection Alias in Integration Server

Setup SoapUI/HermesJMS

  • Copy the following JARs to SOAPUIDIR\hermesJMS\lib: NIRVANADIR\lib\jndi.jar, NIRVANADIR\lib\nClient.jar, NIRVANADIR\lib\nJ2EE.jar, NIRVANADIR\lib\nJMS.jar.
  • Create a new session named IS and add all above JARs to a new classpath group named IS: Add provider JARs for Universal Messaging to HermesJMS
    Click Apply and restart HermesJMS.
  • You should now be able to select IS under Loader and hermes.JNDIConnectionFactory under Class: Configure session in HermesJMS
    Add the properties host, port, initialContextFactory, providerURL, and binding under Connection Factory and configure them according to your environment. You can find the needed values in the JNDI settings of Universal Messaging: JNDI properties in HermesJMS
  • You should now be able to discover queues and topics for the session: Discover queues and topics with HermesJMS
  • To test the subscription to a topic, you can now browse the topic: Browse topic with HermesJMS
  • If you send a test message with Software AG Designer, you should see the message in HermesJMS: Send a JMS test message with Integration Server

    Browse the JMS messages in HermesJMS
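If you want to verify the connection independently of HermesJMS, a small standalone JMS client can perform the same check. The following is only a sketch: the initial context factory class, provider URL, and lookup names are assumptions based on the setup above and have to be adjusted to your environment.

import java.util.Hashtable;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TopicSubscriberCheck
{
    public static void main(String[] args) throws Exception
    {
        // JNDI settings for the Universal Messaging realm (example values;
        // see the JNDI settings of Universal Messaging for the real ones)
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.pcbsys.nirvana.nSpace.NirvanaContextFactory"); // assumed class name
        env.put(Context.PROVIDER_URL, "nsp://localhost:9000");
        Context ctx = new InitialContext(env);

        // Look up the artifacts created in Universal Messaging (assumed lookup names)
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("TopicConnectionFactory");
        Topic topic = (Topic) ctx.lookup("testTopic");

        Connection connection = factory.createConnection();
        try
        {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(topic);
            connection.start();

            // Wait up to ten seconds for a test message, e.g. sent from Software AG Designer
            TextMessage message = (TextMessage) consumer.receive(10000);
            System.out.println(message == null ? "no message received" : message.getText());
        }
        finally
        {
            connection.close();
        }
    }
}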

Possible errors

  • hermes.HermesException: The binding property to locate the ConnectionFactory in the Context is not set
    Add the property binding under Connection Factory in the session preferences (right-click on the session and Edit).

How to create a new webMethods Integration Server instance

In newer versions of the webMethods suite, you can install multiple instances of Integration Server into a single installation. Some central packages are re-used by every instance. However, this may make administration of the instances a bit harder, as these shared packages have to be updated manually, e.g. in case of a version update or fix installation.

Here is how to set up a new Integration Server instance (ISDIR points to the Integration Server directory, e.g. C:\SoftwareAG\IntegrationServer):

  1. Run ISDIR\instances\is_instance.bat create -Dinstance.name=testIS1 -Dprimary.port=5550 -Ddiagnostic.port=9990 -Djmx.port=8077 -Dlicense.file=C:\license.xml
    This will start an Ant build that creates the new IS instance under instances\testIS1. Of course, you may need to adjust the parameters according to your needs.
  2. If the build finishes successfully, the instance can be started with ISDIR\instances\testIS1\bin\startup.bat. You should now be able to connect to localhost:5550 and see the administration page for your new instance:
    Administration page for the newly created Integration Server instance
  3. If you want to install the new instance as a Windows service, you can run ISDIR\instances\testIS1\support\win32\installSvc.bat:
    Create a new service for the newly created Integration Server instance
    You should now see another IS service:
    The service for the newly created Integration Server instance
  4. You should also see the new instance in Command Central. A refresh or a restart of Platform Manager may be needed.
    The newly created Integration Server instance in Command Central

Return code 82 when running ftouch for a Natural FUser

Today we had a problem with one of our Natural FUsers. When trying to add new sources with ftouch, we got the following error message:

user@server ~ $ ftouch fuser=22,173 lib=ACC sm -b -d


        FTOUCH UTILITY V 6.3.13 PL 0   Software AG 2012

Error  : Mass update could not be started.
          Return code 82 received.

As the return code didn’t help with finding a solution, I kicked off strace and followed the output until the error message was shown:

strace -f -v -s 2014 -o /tmp/stracelog.txt ftouch fuser=22,173 lib=ACC sm -b -d
  • -f: Trace child processes as they are created by currently traced processes as a result of the fork(2) system call.
  • -v: Print unabbreviated versions of environment, stat, termios, etc. calls.
  • -s strsize: Specify the maximum string size to print (the default is 32).
  • -o filename: Write the trace output to the file filename rather than to stderr.

Here comes the interesting part:

stat("/home/macke/fuser", {st_dev=makedev(253, 2), st_ino=2007056, st_mode=S_IFDIR|S_ISGID|0775, st_nlink=4, st_uid=1000, st_gid=1000, st_blksize=4096, st_blocks=8, st_size=4096, st_atime=2015/06/02-12:14:40, st_mtime=2015/06/02-12:14:33, st_ctime=2015/06/02-12:14:39}) = 0
open("/tmp/NCFD00b30016.LCK", O_RDONLY) = 3
read(3, "B24B\0\0\0\0\1\0\0\0FD00b30016\0\0006\200\34\0\0\0\0\0", 32) = 32
close(3)                          = 0
semctl(1867830, 0, GETVAL, 0)     = 0
semctl(1867830, 1, GETVAL, 0)     = 9999
unlink("/home/macke/fuser/ACC/FILEDIR.SAG") = -1 ENOENT (No such file or directory)
semop(1867830, 0x7ffdbcb66ab0, 1) = -1 EACCES (Permission denied)
write(1, "Error  : Mass update could not be started.\n", 43) = 43
write(1, "          Return code 82 received.\n", 35) = 35

Apparently, after reading some kind of temporary lock file under /tmp, a system call to semop failed with EACCES (Permission denied).

Without searching for the cause any longer, I simply deleted all the temporary files under /tmp/NCFD* (who cares about temporary files, anyway?) and ftouch immediately ran successfully:

user@server ~ $ ftouch fuser=22,173 lib=ACC sm -b -d


        FTOUCH UTILITY V 6.3.13 PL 0   Software AG 2012

Ftouch request executed with success.
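For the record, the cleanup boiled down to a single command (assuming all stale files match the NCFD pattern seen in the trace):

rm -f /tmp/NCFD*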

How to determine a Natural module’s caller

I wanted to find out from which module another Natural module was called. My goal was to make sure that the module could only be called from a certain other module and would raise an error if a “disallowed” module called it. I don’t want to get into the details here of why this is a bad idea in the first place 😉

In Ruby, this is a one-liner (see Any way to determine which object called a method?):

caller.first

As it turns out, in Natural it’s not that simple. However, it’s not that hard, either. Thanks to a forum post (see Previous Program System Variable) I was able to quickly implement a short subroutine that does the job. It uses User Exit USR0600N (Get program level information) and looks like this:

DEFINE DATA
*
PARAMETER
01 P-CALLER (A8)
*
LOCAL
01 #NAMES (A8/1:32)
01 #LEVEL (P3/1:32)
*
01 #I (I4)
01 #STACK-SIZE (I4)
01 #INDEX-CALLER (I4)
*
END-DEFINE
*
DEFINE SUBROUTINE GET-CALLER
*
RESET P-CALLER
*
CALLNAT 'USR0600N' #NAMES(*) #LEVEL(*)
*
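* Determine the stack size, i.e. the index of the last non-blank entry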
FOR #I 1 *OCC(#NAMES)
  IF #NAMES(#I) NE ' '
    #STACK-SIZE := #I
  END-IF
END-FOR
*
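* Index 1 is this subroutine's module, index 2 the module that performed
* GET-CALLER, and index 3 that module's caller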
#INDEX-CALLER := 3
IF #STACK-SIZE GE 3
  P-CALLER := #NAMES(#INDEX-CALLER)
END-IF
*
END-SUBROUTINE
*
END

It can be called like this:

PERFORM GET-CALLER #CALLER

USR0600N returns the Natural modules currently on the stack in descending order (as you would expect from a stack). So if STACK calls STACK2 and STACK2 calls STACK3 and STACK3 calls GET-CALLER, USR0600N returns:

GET-CALLER (index 1; in fact, this would be the module's name, e.g. "GETCALL")
STACK3 (index 2)
STACK2 (index 3)
STACK (index 4)

This should explain the logic in GET-CALLER above. For the call chain above, WRITE *PROGRAM 'was called by <' #CALLER '>' results in:

STACK3 was called by <SMSTACK2>
STACK2 was called by <SMSTACK >
STACK  was called by <        >

Performance of array redimensioning in Natural

As I found out today, the performance of redimensioning an array in Natural largely depends on the statement you use. I compared RESIZE and EXPAND and found that RESIZE is more than two times slower than EXPAND. With bigger arrays, RESIZE may even be up to 20 times slower than EXPAND!

Unfortunately, the documentation for the two statements is almost identical (see RESIZE and EXPAND). So there is no hint on why the performance is so drastically different.

Example program:

DEFINE DATA
*
LOCAL
01 #I (N8)
01 #ARR (A8/1:*)
01 #START (T)
01 #END (T)
01 #TIME (T)
01 #N (N8)
END-DEFINE
*
#N := 100000
*
#START := *TIMN
*
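* Grow the array one occurrence at a time using RESIZE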
REDUCE ARRAY #ARR TO 0
FOR #I 1 #N
  RESIZE ARRAY #ARR TO (1:#I)
END-FOR
*
#END := *TIMN
#TIME := #END - #START
WRITE 'RESIZE' #TIME
*
#START := *TIMN
*
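* Same loop again, this time using EXPAND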
REDUCE ARRAY #ARR TO 0
FOR #I 1 #N
  EXPAND ARRAY #ARR TO (1:#I)
END-FOR
*
#END := *TIMN
#TIME := #END - #START
WRITE 'EXPAND' #TIME
*
END

Result:

RESIZE 00:00:11
EXPAND 00:00:04

If I use a more realistic array (one that resembles a real database row), the difference is even more pronounced:

01 #ARR (1:*)
  02 #A1 (A8)
  02 #A2 (N8)
  02 #A3 (A) DYNAMIC
  02 #A4 (L)
  02 #A5 (N12,7)
  02 #A6 (A100)
  02 #A7 (A1000)

Result (after only 10,000 iterations):

RESIZE 00:01:04
EXPAND 00:00:18

And another result (after 20,000 iterations):

RESIZE 01:25:02
EXPAND 00:03:23

Should I use Microservices in new projects?

Sam Newman answers this question in his recent post Microservices For Greenfield?:

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

I think I would agree with him. Microservices have been quite popular for a while now, but “plain old” enterprises – excluding hip new startups or big web companies like Netflix – first of all need a shift in the developers’ and organization’s mindset. And until then they will be better off with their “good old” architecture – the monolith.

Over time, microservices may become the default way to compose an enterprise’s infrastructure, but at the moment the organizational overhead of managing the different teams may scare companies away from the concept. However, I would love to be on one of the first teams to try it out 🙂