Continuous Deployment with Natural – Legacy Coder Podcast #6

Continuous Deployment with Software AG's Adabas/Natural

After you have automated the build process for your application based on Software AG’s Adabas and Natural, it’s time to take the next step and also deploy the changes to production after each push to Git! I’ll tell you how in the sixth episode of the Legacy Coder Podcast.

Recap: Automating your build process with Continuous Integration

  • Listen to Episode 2 of the Legacy Coder Podcast for an introduction on why you should automate everything from compilation to deployment.
  • What is Continuous Integration?
    • CI is a software development practice. Every developer integrates (i.e. merges) at least daily. Each integration is built and tested automatically.
    • This leads to fewer integration problems and faster deployment cycles.
  • What can go wrong when manually deploying to production (or the next stage)?
    • Finding all the modules that you need to deploy.
    • NAT0082 Invalid command, or Subprogram does not exist
    • NAT0936 Format/length conflict in parameter
    • NAT0933 GDA time-stamp conflict
      The infamous GDA Timestamp Conflict in Software AG's Natural
    • Separating different features in the same modules from each other.
      Manually merging features in changed Natural modules with WinMerge
    • Nested functions or other dependent modules can’t be compiled.
    • Compile errors nobody cares about (anymore).

Continuous Deployment with Blue/Green Deployments

  • Basic Definitions
    • What is Continuous Delivery?
      • Continuous Delivery builds on top of CI and makes sure that the software can be released at any time. The deployment process has to be fully automated: you just need to push a button to deploy your software to the target environment.
    • What is Continuous Deployment?
      • Continuous Deployment goes one step further and automatically deploys the software to production if all steps of the deployment pipeline pass. Finally, no more “release days”!
    • What is a Deployment Pipeline?
      • The deployment pipeline defines all the steps (called “stages”) necessary to build and deploy a new version of the software. Usually, there are stages for compiling, testing, packaging and deploying the application.
        Deployment Pipeline for Natural
    • What is Pipeline as Code?
      • Instead of manually creating the deployment pipeline in a CI server like Jenkins, you can define all the necessary steps in a domain-specific language that the CI server can execute. All the information needed to build and deploy the application is now contained inside the repository (see the Jenkinsfile sketch after this list).
    • What are Blue/Green Deployments?
      • To be able to deploy to production while users and batch jobs are still using the environment, you need a second target environment for the deployment process. If everything works as expected, you can simply switch over to the new environment.
        Blue/Green Deployments for Natural with different FUsers
  • Advantages of a fully automated deployment process with blue/green deployments
    • No more missing files, GDA timestamp conflicts, or format/length conflicts.
    • Fast feedback (“Fail Fast”) for the developers, problems are visible immediately.
    • You can reliably and easily roll back to a previous state if something breaks.
    • Interactive tests in the production environment are possible.
    • Running processes and interactive sessions won’t be disturbed.
    • Deployments don’t need to be done in the evenings or on weekends.
    • No more annoying tasks for the (expensive) software developers.
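
For illustration, a minimal declarative Jenkinsfile might look like the following sketch. The stage names and Gradle tasks (compileNatural, runNatUnitTests, deployToFUser) are assumptions made up for this example, not the actual pipeline from the episode.

// Jenkinsfile – minimal pipeline-as-code sketch (stage names and Gradle tasks are assumptions)
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm                    // get the Natural sources from Git
            }
        }
        stage('Build') {
            steps {
                bat 'gradlew compileNatural'    // hypothetical task wrapping SAG's Ant script
            }
        }
        stage('Test') {
            steps {
                bat 'gradlew runNatUnitTests'   // hypothetical task running the NatUnit suite
                junit 'build/test-results/**/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                bat 'gradlew deployToFUser'     // hypothetical task deploying to the inactive FUser
            }
        }
    }
}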

Continuous Deployment for Software AG’s Natural

  • These are the tools we use for our CI/CD process (most are Open Source).
    • Git for version control.
    • Gradle as a wrapper for SAG’s Ant script.
    • Jenkins as a CI server.
    • A custom Java program for compiling Natural on the target environment.
    • NatUnit for unit testing.
    • NaturalONE with additional plugins for NatUnit and staging.
    • Redmine for documentation and communication with subject matter experts.
    • Our former apprentice Jonas built a build lamp with a Raspberry Pi that shows the current state of our Natural build.
      Build lamp for Natural above the coffee machine
  • Implementation at AO
    • AO doesn’t have a single pipeline but three separate ones, one for each branch: DEV, QA, and PROD. The core stages are identical, but each pipeline contains individual steps, e.g. for testing and deployment.
      • DEV: only one target FUser that’s overwritten every time; QA: Blue/Green Deployment with an instant switch; PROD: Blue/Green Deployment with a nightly switch so the users aren’t disturbed.
        Continuous Delivery Pipelines for Software AG's Natural at ALTE OLDENBURGER
    • Before the pipeline can do its job, the developer needs to decide which modules to stage. This is done with Redmine where each changed module is assigned to an issue.
      • An Eclipse plugin developed by ALTE OLDENBURGER searches for all modules assigned to a given issue number and stages them. After a push to Git the pipeline starts.
        Eclipse Workflow Plugin for staging Natural modules with the help of Redmine
    • The first stage of the pipeline checks out the latest version of the Natural sources from Git.
    • After that, the target FUser is erased and recreated from scratch.
    • We use SAG’s Ant script to upload the sources to the target server.
    • We do a full build every time, not an incremental build.
      • Compile errors are formatted as JUnit results so they can be displayed inside Jenkins (see the first sketch below this list).
      • We don’t use SAG’s script for that because it doesn’t follow the StepLib chain (for whatever reason).
    • After the compilation, additional test data is imported into Adabas for testing legacy modules that rely on a certain database state.
    • Now the unit and integration tests are executed.
    • If the tests pass, the application is deployed into the next “free” FUser.
    • After the deployment, the current FUser may be switched to the new one, depending on the environment.
    • After the release is done, a few additional steps are required, e.g. restarting the Natural RPC servers against the new FUser.
    • After each production release, a Git tag is created.
    • And finally, a changelog is generated from these Git tags (see the second sketch below this list).
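
One of the steps above formats compile errors as JUnit results so Jenkins can display them. Here is a hypothetical Groovy sketch of that idea; the error structure, file names, and report layout are assumptions, not AO’s actual implementation.

// Hypothetical Groovy sketch: turning Natural compile errors into a JUnit XML report for Jenkins.
// The compileErrors structure and file locations are assumptions for illustration only.
import groovy.xml.MarkupBuilder

def compileErrors = [
    [module: 'MYPROG', message: 'NAT0082 Invalid command, or Subprogram does not exist']
]

new File('build/test-results').mkdirs()
new File('build/test-results/natural-compile.xml').withWriter('UTF-8') { writer ->
    def xml = new MarkupBuilder(writer)
    xml.testsuite(name: 'NaturalCompile', tests: compileErrors.size(), failures: compileErrors.size()) {
        compileErrors.each { err ->
            testcase(classname: 'NaturalCompile', name: err.module) {
                failure(message: err.message)
            }
        }
    }
}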
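
The changelog generation from Git tags could look roughly like the following Gradle task. The task name, tag format, and output file are assumptions as well.

// build.gradle – hypothetical sketch of generating a changelog from Git tags
task generateChangelog {
    doLast {
        // list all tags, newest first, together with their annotation subject
        def proc = ['git', 'tag', '--sort=-creatordate',
                    '--format=%(refname:short): %(contents:subject)'].execute()
        file('CHANGELOG.txt').text = proc.text
    }
}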

How to start with Continuous Deployment for Natural

  • First of all: use Git (or any other version control system, but I would prefer Git). Make your current production sources the new master branch and add development sources on top of that into a new branch.
  • Switch to NaturalONE. Not only do you get a modern IDE with features like Code Completion, but more importantly you get the possibility to use version control from within NaturalONE. Your repository is now the source of truth (and not the server).
  • Make your codebase compile! Don’t allow any compilation error whatsoever!
  • Add an additional FUser for the automated deployment after each push to Git.
  • Generate the deployment script from within NaturalONE and put it into a Jenkins build (see the sketch below for a minimal Gradle wrapper). You don’t need to create a Jenkinsfile yet. Start small!
  • Start writing automated tests, e.g. with NatUnit. By the way: It’s open source! Feel free to use and modify it. https://sourceforge.net/projects/natunit/
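
A first Jenkins build can be as simple as calling the NaturalONE-generated Ant script through Gradle. Here is a minimal sketch; the file name deploy/build.xml and the target name deploy are assumptions and have to be replaced with whatever NaturalONE generated for your project. In Jenkins, the build step then just calls gradlew deployNatural.

// build.gradle – minimal sketch wrapping the NaturalONE-generated Ant deployment script
task deployNatural {
    doLast {
        // delegate to the generated Ant build file (path and target are assumptions)
        ant.ant(antfile: 'deploy/build.xml', target: 'deploy')
    }
}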

How to deploy to JBoss EAP 7 with Gradle, Cargo, and Jenkins

It took me quite a while to get my Java EE 7 application automatically deployed to a target JBoss EAP 7 server from within Jenkins using Gradle as the build tool and Cargo for managing the deployment. So here’s my final solution for you to use! 😉

build.gradle

// Make the Cargo Gradle plugin available to the build script itself
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-cargo-plugin:2.2.3'
    }
}

apply plugin: 'com.bmuschko.cargo'

repositories {
    mavenCentral()
}

// Cargo itself plus the WildFly controller client used for remote deployments
dependencies {
    cargo "org.codehaus.cargo:cargo-core-uberjar:1.5.0",
          "org.codehaus.cargo:cargo-ant:1.5.0",
          "org.wildfly:wildfly-controller-client:8.2.1.Final"
}

cargo {
    containerId = 'wildfly10x'   // there is no EAP 7 ID; the WildFly 10.x container also covers EAP 7

    deployable {
        context = 'MyContext'
    }

    remote {
        hostname = '10.1.1.1'
        username = 'remote'
        password = 'remote'

        containerProperties {
            // remote management port of the target JBoss/WildFly server
            property 'cargo.jboss.management-http.port', 9990
        }
    }
}

Jenkinsfile

stage("Deployment") {
    bat('gradlew cargoRedeployRemote --stacktrace')
}

Deploying to JBoss EAP 7 is the same as deploying to WildFly 10

First of all, there’s no Cargo containerId for JBoss EAP 7. However, you can use WildFly 10 instead, as you can read here: Codehaus Cargo – WildFly 10.x:

The WildFly 10.x container can be used with the JBoss Enterprise Application Platform (EAP) version 7; i.e. the version released in May 2016

Finding the right versions of Cargo and its Gradle plugin

You need to use the right versions of Cargo and the Cargo Gradle plugin. I’ve found that version 2.2.3 of the Gradle plugin and version 1.5.0 of Cargo itself work fine with WildFly 10/JBoss EAP 7 (see Execution failed for task ‘:cargoRunLocal’. #152). As of this writing, the latest versions of Cargo (1.6.6) and the plugin (2.3) also work in my environment.

If the versions don’t work correctly, you might get an error like this:

> gradle cargoRedeployRemote

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cargoRedeployRemote'.
> org.codehaus.cargo.container.ContainerException: Cannot create configuration. 
There's no registered configuration for the parameters (container [id = [wildfly10x],
type = [remote]], configuration type [runtime]). Actually there are no valid types
registered for this configuration. Maybe you've made a mistake spelling it?

Deploying to JBoss EAP 7 with the Wildfly Controller Client

Cargo needs a controller client to be able to deploy artifacts to a remote WildFly 10/JBoss EAP 7, as you can read here: Codehaus Cargo – JBoss Remote Deployer. I’ve found that version 8.2.1.Final of the WildFly controller client org.wildfly:wildfly-controller-client works fine. However, the latest version of org.wildfly.core:wildfly-controller-client (3.0.10.Final) also works.

You need to add it to the cargo dependency configuration in your build script, as shown above. Otherwise you might end up with an error message like this:

> gradle cargoRedeployRemote

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cargoRedeployRemote'.
> org.codehaus.cargo.container.ContainerException: Failed to create deployer
with implementation class org.codehaus.cargo.container.wildfly.WildFly10xRemoteDeployer
for the parameters (container [id = [wildfly10x]], deployer type [remote]).

Changing the JBoss management port

In my case, the target JBoss server uses a different port for remote management. The default is 9990, but I use 19990. Simply adding cargo.port = 19990 to the build file didn’t cut it:

> org.codehaus.cargo.util.CargoException: HTTP request failed, response code: -1,
response message: java.net.ConnectException: Connection refused: connect, response body: null

And by adding --info to the call to gradle I got:

Starting action 'redeploy' for remote container 'wildfly10x' on 'http://localhost:9990'

It took me a while to find the correct way of telling Cargo to use the custom port. The Cargo documentation (see Codehaus Cargo – JBoss Remote Deployer) states:

WildFly 8.x, 9.x and 10.x use the cargo.jboss.management-http.port port

However, setting this property isn’t as easy as adding cargo.jboss.management-http.port = 19990 to your build file, because this results in:

(cargo.jboss.management - http.port) is a binary expression, but it should be a variable expression

And adding the following lines…

cargo {
    jboss {
        management-http {
            port = 19990
        }
    }
}

…leads to a different error:

> Could not find method jboss() for arguments [cargo_61gwz9gjyqje40dvlr47klkas$_run_closure3$_closure6@22ff8f9]
on object of type com.bmuschko.gradle.cargo.convention.CargoPluginExtension.

Finally, I’ve found the right way of setting the property in this article: Local redeployment #123

containerProperties {
    property 'cargo.jboss.management-http.port', 19990
}

However, if you use the newest versions of Cargo and the plugin, cargo.port = 19990 seems to work again.

Example build.gradle using the latest versions of Cargo and the plugin

// Make the Cargo Gradle plugin available to the build script itself
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-cargo-plugin:2.3'
    }
}

apply plugin: 'com.bmuschko.cargo'

repositories {
    mavenCentral()
}

// latest Cargo and the newer WildFly Core controller client
dependencies {
    cargo "org.codehaus.cargo:cargo-core-uberjar:1.6.6",
          "org.codehaus.cargo:cargo-ant:1.6.6",
          "org.wildfly.core:wildfly-controller-client:3.0.10.Final"
}

cargo {
    containerId = 'wildfly10x'   // also used for JBoss EAP 7
    port = 9990                  // with the latest versions, the management port can be set directly again

    deployable {
        context = 'MyContext'
    }

    remote {
        hostname = '10.1.1.1'
        username = 'remote'
        password = 'remote'
    }
}