Category Archives: Java

How to disable SSL/TLS protocols in Springboot?


A common requirement is to secure an application as well as the connections made to it.

Prior to TLS 1.2, many versions of SSL and TLS came into existence to enforce transport layer security. Those previous versions were vulnerable to various attacks/threats, which were fixed in subsequent versions.

To enforce security, you may want to accept connections only over TLS 1.2, and thus enable TLSv1.2 while disabling all other versions: SSLv3, TLS 1.0, TLS 1.1, etc.

The purpose of this article is to list the steps required to enable only TLS 1.2, and disable all other versions, in a Springboot application.

PRE-REQUISITES

  • JRE
  • IDE of your choice
  • Springboot Application
  • Certificates – be it Self Signed or from Public CA

This article assumes that your application has already enabled SSL and configured certificates and secure HTTP connectors, either programmatically or through configuration.

HOW IT WORKS

Before we look into the steps, let's first understand how things work. Basically, an application sets up a virtual host/container (Jetty, Tomcat, Undertow, etc.) as well as HTTP listener(s).

In a Springboot application, embedded containers can be setup using

EmbeddedServletContainerFactory

during bootstrapping.

For Tomcat,

TomcatEmbeddedServletContainerFactory

is initialized, and likewise for the others. These containers set up HTTP connectors and configure them for

  • Port
  • URI Encoding
  • SSL settings (optional)
  • Compression (optional)
  • Protocol handler, etc.

HOW TO DISABLE SSL OR TLS < 1.2?

  1. In Springboot versions < 1.4.x

    For Springboot applications with versions < 1.4.x, there is no support for disabling protocols through configuration. The application YAML configuration has a few properties to enable SSL, but it does not provide a mechanism to set the SSL enabled protocols.

    Thus, the changes have to be made programmatically.

  But how?

  Do I need to initialize the Tomcat factory and connector and stitch everything together?

Luckily, no. Springboot allows you to customize the existing container and further customize its connectors.

Does that mean I just need to create a customizer and somehow attach it to the existing initialized container?

Yes, that's right.

Add the code below and your problem will be solved. During the service bootstrapping process, we inject an

EmbeddedServletContainerCustomizer

and a

TomcatConnectorCustomizer

bean, and the Spring IoC container wires them up for you.


@Bean
public EmbeddedServletContainerCustomizer containerCustomizer(TomcatConnectorCustomizer connectorCustomizer) {
    return new EmbeddedServletContainerCustomizer() {
        public void customize(ConfigurableEmbeddedServletContainer container) {
            if (container instanceof TomcatEmbeddedServletContainerFactory) {
                TomcatEmbeddedServletContainerFactory tomcat = (TomcatEmbeddedServletContainerFactory) container;
                tomcat.addConnectorCustomizers(connectorCustomizer);
            }
        }
    };
}

/**
 * Sets up the Tomcat connector customizer to enable ONLY TLSv1.2.
 * @return Reference to an instance of TomcatConnectorCustomizer
 */
@Bean
public TomcatConnectorCustomizer connectorCustomizer() {
    return new TomcatConnectorCustomizer() {
        @Override
        public void customize(Connector connector) {
            connector.setAttribute("sslEnabledProtocols", "TLSv1.2");
        }
    };
}

    2. In Springboot versions >= 1.4.x

      For Springboot applications > 1.4.x, things have been made much simpler and can be done through YAML configuration.

server:
   ssl:
     enabled: true
     key-store: classpath:Keystore.jks
     key-store-password: <storepassword>
     key-password: <password>
     key-alias: <yourKeyAlias>
     enabled-protocols: [TLSv1.2]
   port: 8443

enabled-protocols: [TLSv1.2] is the trick here.
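Independently of the server configuration, you can check which TLS protocol versions your JRE supports and enables by default. A quick, self-contained check using only plain JDK APIs (no Spring required):

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class ProtocolCheck {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // Protocols the JRE could negotiate if asked to
        System.out.println("Supported: " + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
        // Protocols enabled out of the box (what you get unless you restrict them)
        System.out.println("Default:   " + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
    }
}
```

If TLSv1.2 does not appear in the supported list, no amount of connector configuration will help; upgrade the JRE first.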

Simple. Isn’t it?


My First Lambda – Not Just Hello World


Amazon Web Services, aka AWS, provides many cloud products.
In this post, I want to share my learnings and experiences from working with one of them, called Lambda.

I'll begin by explaining our use case a bit, and then move on to implementing and deploying a Lambda.

USE CASE

I was working on designing and implementing a requirement to ticket air bookings. Without ticketing, a user cannot board a flight.

MORE ABOUT TICKETING PROCESS

Ticketing is an orchestration of a series of steps; some require business-logic evaluation and some require interacting with different 3rd-party services multiple times over the network.

This process can be seen as event driven: it can be done asynchronously, with retry and scheduling capabilities, and it involves interaction with 3rd-party services over the network.

It has to be completed within the time constraints set by airlines/GDSes, otherwise the user cannot fly.

After gathering the requirements, it seemed to be a use case for building a bot, a Ticketing Bot more specifically, with the Executor-Scheduler-Supervisor-Agent pattern fitting very well technically.

WHAT IS "EXECUTOR-SCHEDULER-SUPERVISOR-AGENT"?

It's a pattern wherein roles and responsibilities are clearly separated out to different actors/components.
Executor, Supervisor, and Agent represent different blocks, and each is responsible for performing a clearly defined task.

The Executor is responsible for executing the orchestration, and likewise for the others. You may choose to use persistent workflow frameworks or queues for orchestration execution.

WHERE DOES LAMBDA FIT IN OUR CASE?

The ticketing process has to be completed for multiple bookings. After all, multiple users are making bookings on our site.

This demands multiple executors running in parallel, each executing an orchestration independently with no interference.

Obviously, you will want each executor to pick a different booking for ticketing.
For this, you will have synchronization and other checks in place so that once a booking is owned by one executor, it does not get executed by another.

Let's say we have a strategy where, once a booking is picked by an executor, the executor updates a work item with its ownership and a timestamp, and changes its status to In_Progress to reflect that the ticketing process has kicked in.
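That claim step can be sketched in a few lines. Below is a toy version using an in-memory map in place of the real datastore; the WorkItem class and its field names are illustrative, not from the actual system:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClaimSketch {
    static class WorkItem {
        String status = "Ready";
        String owner;
        long claimedAtMillis;
    }

    static final Map<String, WorkItem> workItems = new ConcurrentHashMap<>();

    // Returns true only for the first executor that claims the booking
    static boolean claim(String bookingId, String executorId) {
        WorkItem item = workItems.get(bookingId);
        if (item == null) return false;
        synchronized (item) {
            if (!"Ready".equals(item.status)) return false; // already owned
            item.status = "In_Progress";
            item.owner = executorId;
            item.claimedAtMillis = System.currentTimeMillis();
            return true;
        }
    }

    public static void main(String[] args) {
        workItems.put("booking-1", new WorkItem());
        System.out.println(claim("booking-1", "executor-A")); // true: first claim wins
        System.out.println(claim("booking-1", "executor-B")); // false: already In_Progress
    }
}
```

In a real system the compare-and-set would of course be done atomically in the datastore (e.g. a conditional update), not in application memory.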

Now think of scenarios wherein:

  • an executor (server) performing a ticketing process crashes in the middle of the process,
  • the server has been put out of rotation due to being unhealthy, or
  • you want to deploy incremental changes, which may involve halting/interrupting the currently executing ticketing processes.

The 3rd scenario can be dealt with by publishing events to reach a consistent state and stop further processing.

But what about the other scenarios? In those, the ticketing process(es) will appear to be running with In_Progress status while that's not actually the case.

How will you ensure that those processes get completed later?

We certainly want to complete the ticketing process at any cost.

What if we had something that could detect such stuck bookings and reprocess them from the last checkpoint?

Let's just focus on the Supervisor.

What is the role of the "Supervisor"?

The Supervisor is a component responsible for detecting such stuck bookings and queueing them for further re-processing. Note that it does not start executing those processes itself; instead, it just re-queues them so that an executor can pick them up again.
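The detection itself can be as simple as an age check against the claimed timestamp. A minimal sketch (class and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class StuckDetector {
    static class WorkItem {
        String bookingId;
        String status;
        long claimedAtMillis;

        WorkItem(String bookingId, String status, long claimedAtMillis) {
            this.bookingId = bookingId;
            this.status = status;
            this.claimedAtMillis = claimedAtMillis;
        }
    }

    // An item is "stuck" if it has been In_Progress longer than maxAgeMillis
    static List<String> findStuck(List<WorkItem> items, long nowMillis, long maxAgeMillis) {
        List<String> stuck = new ArrayList<>();
        for (WorkItem item : items) {
            if ("In_Progress".equals(item.status)
                    && nowMillis - item.claimedAtMillis > maxAgeMillis) {
                stuck.add(item.bookingId); // re-queue this one; don't execute it here
            }
        }
        return stuck;
    }

    public static void main(String[] args) {
        long now = 100_000;
        List<WorkItem> items = List.of(
                new WorkItem("b1", "In_Progress", 10_000),  // 90s old -> stuck
                new WorkItem("b2", "In_Progress", 95_000),  // 5s old  -> fine
                new WorkItem("b3", "Ticketed", 10_000));    // done    -> ignored
        System.out.println(findStuck(items, now, 60_000)); // [b1]
    }
}
```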

In our case, the Supervisor has to connect to queues/data stores hosted in a VPC.
OK. What are the other expectations from this Supervisor?

  1. It has to be available. You would not want your Supervisor to be down for a long time.
  2. A single Supervisor can fulfill the need; there is no need to run multiple Supervisors at a time.
  3. The Supervisor runs periodically.
  4. The Supervisor runs in the background.
  5. The Supervisor has no state attached to it.

All the above expectations made Lambda a good fit in our case.

Enough of the story 🙂 Before you start cursing me, let’s start building a Lambda.

LAMBDA

Lambda is a function that can be executed in the AWS cloud environment based on certain trigger policies. A trigger can be a scheduled timed event, an S3 event, or the like.

Refer to the AWS documentation for more information.

BUILDING AND DEPLOYING LAMBDA

Building a Lambda is simple. It requires a function which will be executed based on a trigger policy. At the time of writing, a Lambda can be written in Java, Python, or Node.js.

Let's build a Lambda in Java.

  1. Create a class (any name, e.g. MyFirstLambda or, as below, Supervisor) and a handler function (any name), as below:
    public class Supervisor {
        public void queueStuckOrdersForReprocessing(Context context) {
            // Implement this function as per the tasks that need to be accomplished
        }
    }
  2. Implement the handler function, keeping in mind the task you want to accomplish. In our case, we wanted to detect and queue the stuck bookings for re-processing.
    public class Supervisor {
        private static final String IN_PROGRESS = "In_Progress";
        private static final String REPROCESS = "Reprocess";

        public void queueStuckOrdersForReprocessing(Context context) {
            LambdaLogger logger = context.getLogger();
            logger.log("Supervisor Cycle Started");

            // Problem: time consuming, while the actual task is pretty small
            // Problem: how can I initialize based on environment or profile, like Spring Profiles?
            QueueingService queueingService = this.initialize();

            logger.log("Supervisor Initialized");

            // Problem: how can I execute multiple tasks in parallel?
            this.buildTask(queueingService, logger).run();

            logger.log("Supervisor Cycle Completed");
        }

        private String getProperty(String name) {
            return System.getenv(name);
        }

        private QueueingService initialize() {
            return new QueueingService() {
                {
                    // Initialize; it could be time consuming.
                    // You may be using MongoDB as a queue and initializing might take some time.
                }

                /**
                 * Moves the products stuck in queue1 for the past 'timeInSeconds' seconds to queue2.
                 * @param queue1 Current queue the product is in
                 * @param timeInSeconds Time in seconds since the product was last acted upon
                 * @param queue2 New queue the product shall be moved to
                 * @return Number of products that got reset
                 */
                @Override
                public int move(String queue1, int timeInSeconds, String queue2) {
                    // your implementation here
                    return 0;
                }
            };
        }

        // Problem: the logger has to be passed everywhere we want to log
        private Runnable buildTask(QueueingService queueingService, LambdaLogger logger) {
            return new Runnable() {
                @Override
                public void run() {
                    int noOfProducts =
                            queueingService.move(IN_PROGRESS,
                                    Integer.parseInt(getProperty("IN_PROGRESS_AGE")),
                                    REPROCESS);

                    logger.log(
                            String.format(
                                    "Supervisor requeued '%s' Products for ReProcessing",
                                    noOfProducts));
                }
            };
        }
    }
             

    The above code works. However, it can be refactored and optimized further.

    Let's assume that the queues are maintained in a database, MongoDB (NoSQL).
    Initializing a MongoDB connection can take a lot of time, while the actual task to be performed may not be that time consuming.

    You may ask: is there a way we can initialize just once, and thus be more performant and consume fewer resources?

    Fortunately, there is.

    AWS says that a Lambda container can be reused for subsequent invocations.

    Note the words can be. AWS does not guarantee it, but there is a possibility.

    If that's the case, to avoid re-initialization, how about maintaining fields and using them? We can simply keep QueueingService as a field/state in the Supervisor class and use it.

    Below is the refactored code.

    public class Supervisor {
        private static final String IN_PROGRESS = "In_Progress";
        private static final String REPROCESS = "Reprocess";

        private boolean isInitialized = false;
        private QueueingService queueingService;
        private LambdaLogger logger;

        public void queueStuckOrdersForReprocessing(Context context) {
            logger = context.getLogger();
            logger.log("Supervisor Cycle Started");

            // Fields are initialized once; on container reuse, they will not be re-initialized
            // Problem: how can I initialize based on environment or profile, like Spring Profiles?
            this.initialize();

            logger.log("Supervisor Initialized");

            // Problem: how can I execute multiple tasks in parallel?
            this.buildTask(this.queueingService).run();

            logger.log("Supervisor Cycle Completed");
        }

        private String getProperty(String name) {
            return System.getenv(name);
        }

        private void initialize() {
            if (!this.isInitialized) {
                this.queueingService = new QueueingService() {
                    {
                        // Initialize; it could be time consuming.
                        // You may be using MongoDB as a queue and initializing might take some time.
                    }

                    /**
                     * Moves the products stuck in queue1 for the past 'timeInSeconds' seconds to queue2.
                     * @param queue1 Current queue the product is in
                     * @param timeInSeconds Time in seconds since the product was last acted upon
                     * @param queue2 New queue the product shall be moved to
                     * @return Number of products that got reset
                     */
                    @Override
                    public int move(String queue1, int timeInSeconds, String queue2) {
                        // your implementation here
                        return 0;
                    }
                };
                this.isInitialized = true;
            }
        }

        private Runnable buildTask(QueueingService queueingService) {
            return new Runnable() {
                @Override
                public void run() {
                    int noOfProducts =
                            queueingService.move(IN_PROGRESS,
                                    Integer.parseInt(getProperty("IN_PROGRESS_AGE")),
                                    REPROCESS);

                    logger.log(
                            String.format(
                                    "Supervisor requeued '%s' Products for ReProcessing",
                                    noOfProducts));
                }
            };
        }
    }
             

    Great. But I still have another problem: I want to execute multiple tasks, not sequentially but in parallel.

    AWS does allow creating threads or thread pool(s), as long as the CPU and memory limits are not crossed. Refer to the AWS documentation.

    The code below has a simple change to create a thread pool of size 1. Just change the size to create more threads.

    public class Supervisor {
        private static final String IN_PROGRESS = "In_Progress";
        private static final String REPROCESS = "Reprocess";

        private boolean isInitialized = false;
        private QueueingService queueingService;
        private LambdaLogger logger;

        public void queueStuckOrdersForReprocessing(Context context) {
            logger = context.getLogger();
            logger.log("Supervisor Cycle Started");

            // Fields are initialized once; on container reuse, they will not be re-initialized
            // Problem: how can I initialize based on environment or profile, like Spring Profiles?
            this.initialize();

            logger.log("Supervisor Initialized");

            ExecutorService executor = Executors.newFixedThreadPool(1);

            Future<?> enrichedSupervisor = executor.submit(this.buildTask(this.queueingService));

            try {
                // wait for the task(s) to complete
                enrichedSupervisor.get();
            } catch (InterruptedException | ExecutionException e) {
                logger.log("Supervisor task failed: " + e.getMessage());
            }
            executor.shutdown();

            logger.log("Supervisor Cycle Completed");
        }

        private String getProperty(String name) {
            return System.getenv(name);
        }

        private void initialize() {
            if (!this.isInitialized) {
                this.queueingService = new QueueingService() {
                    {
                        // Initialize; it could be time consuming.
                        // You may be using MongoDB as a queue and initializing might take some time.
                    }

                    /**
                     * Moves the products stuck in queue1 for the past 'timeInSeconds' seconds to queue2.
                     * @param queue1 Current queue the product is in
                     * @param timeInSeconds Time in seconds since the product was last acted upon
                     * @param queue2 New queue the product shall be moved to
                     * @return Number of products that got reset
                     */
                    @Override
                    public int move(String queue1, int timeInSeconds, String queue2) {
                        // your implementation here
                        return 0;
                    }
                };
                this.isInitialized = true;
            }
        }

        private Runnable buildTask(QueueingService queueingService) {
            return new Runnable() {
                @Override
                public void run() {
                    int noOfProducts =
                            queueingService.move(IN_PROGRESS,
                                    Integer.parseInt(getProperty("IN_PROGRESS_AGE")),
                                    REPROCESS);

                    logger.log(
                            String.format(
                                    "Supervisor requeued '%s' Products for ReProcessing",
                                    noOfProducts));
                }
            };
        }
    }
    

    Another problem I have: I have different environments set up, and in each environment I have different settings; say, the MongoDB cluster is different.
    I want to package resource files in the jar and load them per environment, rather than configuring each setting as an environment variable.

    How can I initialize based on an environment?

    Once again, AWS comes to the rescue. It provides the ability to specify environment variables during configuration, and these get passed to the Lambda function as environment variables on each execution.
    What if we set the environment, and based on its value we load the resource file, like Spring loads configuration based on a profile?

    Let's see how this can be achieved.

    public class Supervisor {
        private static final String MONGODB_URI_SETTINGNAME = "mongodb.uri";
        private static final String IN_PROGRESS_AGE_SETTINGNAME = "inprogress.ageInSomeTimeUnit";
        private static final String ENVIRONMENT_SETTINGNAME = "environment";

        private static final String IN_PROGRESS = "In_Progress";
        private static final String REPROCESS = "Reprocess";

        private boolean isInitialized = false;
        private String environment;
        private Properties properties;
        private QueueingService queueingService;
        private LambdaLogger logger;

        public void queueStuckOrdersForReprocessing(Context context) {
            logger = context.getLogger();
            logger.log("Supervisor Cycle Started");

            // Fields are initialized based on environment; on container reuse, they will not be re-initialized
            this.initialize();

            logger.log("Supervisor Initialized");

            ExecutorService executor = Executors.newFixedThreadPool(1);

            Future<?> enrichedSupervisor = executor.submit(this.buildTask(this.queueingService));

            try {
                // wait for the task(s) to complete
                enrichedSupervisor.get();
            } catch (InterruptedException | ExecutionException e) {
                logger.log("Supervisor task failed: " + e.getMessage());
            }
            executor.shutdown();

            logger.log("Supervisor Cycle Completed");
        }

        private String getSystemEnv(String name) {
            return System.getenv(name);
        }

        // This is to get the profile-based properties
        private String getProperty(String name) {
            return this.properties.getProperty(name);
        }

        // This does the initialization
        private void initialize() {
            if (!this.isInitialized) {
                try {
                    this.initializeProps();
                } catch (IOException e) {
                    throw new IllegalStateException("Could not load profile properties", e);
                }
                this.queueingService = new QueueingService() {
                    {
                        // Initialize; it could be time consuming.
                        // You may be using MongoDB as a queue and initializing might take some time.
                    }

                    /**
                     * Moves the products stuck in queue1 for the past 'timeInSeconds' seconds to queue2.
                     * @param queue1 Current queue the product is in
                     * @param timeInSeconds Time in seconds since the product was last acted upon
                     * @param queue2 New queue the product shall be moved to
                     * @return Number of products that got reset
                     */
                    @Override
                    public int move(String queue1, int timeInSeconds, String queue2) {
                        // your implementation here
                        return 0;
                    }
                };
                this.isInitialized = true;
            }
        }

        private void initializeProps() throws IOException {
            this.initializeEnvironment();
            if (this.properties == null) {
                // Note: flat 'key: value' entries can be read via Properties (':' is a valid separator)
                final String propFileName = String.format("application-%s.yml", this.environment);
                this.properties = new Properties();
                this.properties.load(Supervisor.class.getClassLoader().getResourceAsStream(propFileName));
            }
        }

        private void initializeEnvironment() {
            this.environment = getSystemEnv(ENVIRONMENT_SETTINGNAME);
            if (StringUtils.isBlank(this.environment)) {
                this.environment = "prod";
            }
        }

        private Runnable buildTask(QueueingService queueingService) {
            return new Runnable() {
                @Override
                public void run() {
                    int noOfProducts =
                            queueingService.move(IN_PROGRESS,
                                    Integer.parseInt(getProperty(IN_PROGRESS_AGE_SETTINGNAME)),
                                    REPROCESS);

                    logger.log(
                            String.format(
                                    "Supervisor requeued '%s' Products for ReProcessing",
                                    noOfProducts));
                }
            };
        }
    }
    
  3. Package your code into a jar using Maven or a build tool of your preference.

DEPLOYING LAMBDA

  1. Using the AWS CLI (Command Line Interface) to upload the jar and other required/optional configurations
  2. Through the AWS console, where you can provide the different configurations

HOW CAN I/WE ACCOMPLISH THIS?

  1. We use different environments, like a test environment, stress, etc., before releasing to PROD, and in each environment we want different settings. How can we pass different settings, like activating different profiles in Spring?      [ANSWER]: AWS allows you to configure and pass environment variables to a Lambda on execution. While configuring a Lambda function, define what environment variables need to be passed to your Lambda, and then act based on those environment variables.
  2. Our Lambda needs to connect to components/services deployed in our VPC. On execution, the Lambda function is not able to connect to that component.        [ANSWER]: AWS considers and enforces security. To allow connections, configure the Lambda with the proper subnet IDs of your VPC and the required permissions.
  3. Our Lambda is not event driven; it's based on files written to S3. How can we pass event data to the Lambda?
    [ANSWER]: This blog focused on a Lambda with no event data; however, AWS supports different event sources. Refer to the AWS documentation. In order to pass event data to the Lambda function, the handler function can accept more parameters. A parameter can even be of a custom type, and AWS takes care of serialization and de-serialization.

THINGS TO KEEP IN MIND

  1. AWS puts restrictions on executing a Lambda, be it the size of the jar or constraints on resources like CPU, memory, etc. Always check the restrictions on the AWS site before considering Lambda.
  2. Make sure that you understand the billing. Lambda is billed based on resource usage and the total execution time.
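As a back-of-the-envelope sketch of that billing model (the rates below are illustrative assumptions, not current AWS prices; always check the AWS pricing page):

```java
public class LambdaCostSketch {
    public static void main(String[] args) {
        // Assumed, illustrative rates — NOT current AWS pricing
        double perRequestUsd = 0.0000002;     // per invocation
        double perGbSecondUsd = 0.0000166667; // per GB-second of billed duration

        long requests = 1_000_000;
        double memoryGb = 0.5;          // 512 MB configured
        double billedSecondsEach = 0.2; // billed duration per invocation

        double cost = requests * perRequestUsd
                + requests * billedSecondsEach * memoryGb * perGbSecondUsd;
        System.out.printf("Approx cost: $%.2f%n", cost);
    }
}
```

The point of the sketch: memory size and billed duration multiply, so over-provisioning memory or letting a slow initialization run on every invocation directly inflates the bill.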

FEW MORE TIPS

  • Give your Lambda a good name
  • Tag your Lambda for proper identification and for enforcing security policies
  • Do not package redundant dependencies. They can make your package heavy, and it may not even be fit to run as a Lambda.
  • Have CloudWatch-metric-based alarms in place
  • Ensure that you do not over-configure your Lambda with all the subnet IDs of your VPC
  • When deploying your Lambda in a VPC, scaling has to be thought through properly
  • Have proper logging for debugging and tracing purposes. Logs are available in CloudWatch as well

Pass Custom DateTime Zone in SQL Query Date Time Parameter | Hibernate


Using Hibernate and struggling with querying a DateTime column in an RDBMS (like MS-SQL) in a specific timezone?
No matter what timezone your DateTime object has, do you observe while issuing a Hibernate query
that the time in the default timezone of the JVM always gets passed, thus not giving you the desired results?

If that's the case, this article describes a process for querying a DateTime column with a specific timezone.

WHY DOES THIS HAPPEN?

It happens because your application server and database server are running in different timezones.

If your application server and database server run in different timezones, we need to ensure that DateTime query parameter values are sent as per the DB timezone to get the desired results.

Let's understand how Hibernate and the DB driver form a SQL query in the next section.

HOW DOES HIBERNATE CREATE A QUERY?

On the application server, the DB driver forms a command before sending it to the RDBMS. The database system then executes the query (compiling it if needed) and returns the results accordingly.

The DB driver instantiates a command in the form of a PreparedStatement object. Then a DB connection is attached to the command object on which the command will be executed. Since we want to query by certain parameters, DateTime in our case, the DB driver sets the query parameters on the command.

PreparedStatement exposes a few APIs to set parameters, depending on the type of the parameter.
To pass DateTime information, the various APIs exposed are:

  • setDate
  • setTime
  • setTimestamp

All these functions allow a Calendar object to be passed. Using this Calendar object, the driver constructs the SQL DateTime value.

If this Calendar object is not passed, the driver uses the DEFAULT TIMEZONE of the JVM running the application. This is where things go wrong and the desired results are not obtained.
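To see why the JVM default matters, the same instant renders as different wall-clock values depending on the timezone used to interpret it:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimezoneDemo {
    public static void main(String[] args) {
        Date instant = new Date(0L); // epoch: 1970-01-01T00:00:00Z

        SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        utc.setTimeZone(TimeZone.getTimeZone("UTC"));

        SimpleDateFormat la = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        la.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));

        // Same instant, two different wall-clock renderings:
        System.out.println("UTC: " + utc.format(instant)); // 1970-01-01 00:00:00
        System.out.println("LA:  " + la.format(instant));  // 1969-12-31 16:00:00
    }
}
```

The driver does the analogous conversion when binding a Timestamp parameter, which is why the Calendar (or the JVM default, in its absence) changes what the database actually receives.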

How can we solve it then?

DIFFERENT APPROACHES

  1. Setting the timezones of the application server and the DB server to be the same
  2. Setting the timezone of the JVM to that of the DB server
  3. Extending the TimestampTypeDescriptor and AbstractSingleColumnStandardBasicType classes and attaching them to the driver

The 1st and 2nd approaches are fine; however, they can have side effects.

The 1st can impact other applications running on the same system. Usually one application runs on a single server in a production or LIVE environment; however, with this we are limiting the deployment of other applications.

The 2nd approach is better than the 1st since it will not impact other applications. However, the caveat here is: what if your application talks to different DB systems in different timezones? Or what if you want to set the timezone on only a few selected time fields?
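For reference, the 2nd approach is essentially a one-liner, done very early in bootstrap (or equivalently via the -Duser.timezone JVM flag); the zone below is just an example:

```java
import java.util.TimeZone;

public class JvmTimezone {
    public static void main(String[] args) {
        // Align the JVM default timezone with the DB server's zone
        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));
        System.out.println(TimeZone.getDefault().getID()); // America/Los_Angeles
    }
}
```

Because this mutates a process-wide default, it carries exactly the caveats described above: every date conversion in the JVM is affected, not just the fields you care about.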

The 3rd approach is flexible. It allows you to represent different time fields in different timezones.

Alright. Can we have the steps, then, to implement approach #3?

STEPS FOR THE 3rd APPROACH:

Provide a custom TimestampTypeDescriptor and AbstractSingleColumnStandardBasicType.
  • Implement the descriptor class as given below:
    import java.sql.CallableStatement;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.util.Calendar;
    import java.util.TimeZone;
    
    import org.hibernate.type.descriptor.ValueBinder;
    import org.hibernate.type.descriptor.ValueExtractor;
    import org.hibernate.type.descriptor.WrapperOptions;
    import org.hibernate.type.descriptor.java.JavaTypeDescriptor;
    import org.hibernate.type.descriptor.sql.BasicBinder;
    import org.hibernate.type.descriptor.sql.BasicExtractor;
    import org.hibernate.type.descriptor.sql.TimestampTypeDescriptor;
    
    /**
     * Descriptor for {@link Types#TIMESTAMP TIMESTAMP} handling with zone.
     */
    public class CustomZonedTimestampDescriptor extends TimestampTypeDescriptor {
        public static final CustomZonedTimestampDescriptor PST_INSTANCE = new CustomZonedTimestampDescriptor();
    
        /**
         * Instantiate an object of CustomZonedTimestampDescriptor with Timezone set to "America/Los_Angeles"
         */
        public CustomZonedTimestampDescriptor() {
            this.calendar = Calendar.getInstance(TimeZone.getTimeZone("America/Los_Angeles"));
        }
    
        /**
         * Instantiate an object of CustomZonedTimestampDescriptor
         * @param zone Timezone to be used
         */
        public CustomZonedTimestampDescriptor(TimeZone zone) {
            this.calendar = Calendar.getInstance(zone);
        }
    
        /**
         * Get the binder (setting JDBC in-going parameter values) capable of handling values of the type described by the
         * passed descriptor.
         *
         * @param javaTypeDescriptor The descriptor describing the types of Java values to be bound
         *
         * @return The appropriate binder.
         */
        @Override
        public <X> ValueBinder<X> getBinder(final JavaTypeDescriptor<X> javaTypeDescriptor) {
            return new BasicBinder<X>( javaTypeDescriptor, this ) {
                @Override
                protected void doBind(PreparedStatement st, X value, int index, WrapperOptions options) throws
                        SQLException {
                    st.setTimestamp(index, javaTypeDescriptor.unwrap(value, Timestamp.class, options), calendar);
                }
            };
        }
    
        /**
         * Get the extractor (pulling out-going values from JDBC objects) capable of handling values of the type described
         * by the passed descriptor.
         *
         * @param javaTypeDescriptor The descriptor describing the types of Java values to be extracted
         *
         * @return The appropriate extractor
         */
        @Override
        public <X> ValueExtractor<X> getExtractor(final JavaTypeDescriptor<X> javaTypeDescriptor) {
            return new BasicExtractor<X>( javaTypeDescriptor, this ) {
                @Override
                protected X doExtract(ResultSet rs, String name, WrapperOptions options) throws SQLException {
                    return javaTypeDescriptor.wrap(rs.getTimestamp(name, calendar), options);
                }
    
                @Override
                protected X doExtract(CallableStatement statement, int index, WrapperOptions options) throws SQLException {
                    return javaTypeDescriptor.wrap(statement.getTimestamp(index, calendar), options);
                }
    
                @Override
                protected X doExtract(CallableStatement statement, String name, WrapperOptions options)
                        throws SQLException {
                    return javaTypeDescriptor.wrap(statement.getTimestamp(name, calendar), options);
                }
            };
        }
    
        private final Calendar calendar;
    }
    

    In the above code, the default constructor uses the PST timezone ("America/Los_Angeles"). For any other timezone, simply use the parameterized constructor.

  • Implement Type class and use the above Descriptor class
    import com.expedia.www.air.commission.migration.db.descriptors.CustomZonedTimestampDescriptor;
    
    import java.util.Comparator;
    import java.util.Date;
    import java.util.TimeZone;
    
    import org.hibernate.dialect.Dialect;
    import org.hibernate.engine.spi.SessionImplementor;
    import org.hibernate.type.AbstractSingleColumnStandardBasicType;
    import org.hibernate.type.LiteralType;
    import org.hibernate.type.TimestampType;
    import org.hibernate.type.VersionType;
    import org.hibernate.type.descriptor.java.JdbcTimestampTypeDescriptor;
    
    /**
     * A type that maps between {@link java.sql.Types#TIMESTAMP TIMESTAMP} and {@link java.sql.Timestamp} with zone
     */
    public class CustomZonedTimestampType extends AbstractSingleColumnStandardBasicType<Date>
            implements VersionType<Date>, LiteralType<Date> {
        /**
         * Instantiate an object of CustomZonedTimestampType with Timezone set to "America/Los_Angeles"
         */
        public CustomZonedTimestampType() {
            super(CustomZonedTimestampDescriptor.PST_INSTANCE, JdbcTimestampTypeDescriptor.INSTANCE);
        }
    
        /**
         * Instantiate an object of CustomZonedTimestampType
         * @param zone Timezone to be used
         */
        public CustomZonedTimestampType(TimeZone zone) {
            super(new CustomZonedTimestampDescriptor(zone), JdbcTimestampTypeDescriptor.INSTANCE);
        }
    
        /**
         * Returns the abbreviated name of the type.
         * @return String the Hibernate type name
         */
        @Override
        public String getName() {
            return TimestampType.INSTANCE.getName();
        }
    
        /**
         * Convert the value into a string representation, suitable for embedding in an SQL statement as a
         * literal.
         * @param value The value to convert
         * @param dialect The SQL dialect
         * @return The value's string representation
         * @throws Exception Indicates an issue converting the value to literal string.
         */
        @Override
        public String objectToSQLString(Date value, Dialect dialect) throws Exception {
            return TimestampType.INSTANCE.objectToSQLString(value, dialect);
        }
    
        /**
         * Generate an initial version.
         * @param session The session from which this request originates.
         * @return an instance of the type
         */
        @Override
        public Date seed(SessionImplementor session) {
            return TimestampType.INSTANCE.seed(session);
        }
    
        /**
         * Increment the version.
         * @param current the current version
         * @param session The session from which this request originates.
         * @return an instance of the type
         */
        @Override
        public Date next(Date current, SessionImplementor session) {
            return TimestampType.INSTANCE.next(current, session);
        }
    
        /**
         * Get a comparator for version values.
         * @return The comparator to use to compare different version values.
         */
        @Override
        public Comparator<Date> getComparator() {
            return getJavaTypeDescriptor().getComparator();
        }
    }
    
  • Add an annotation @Type on the fields for which proper Timezone has to be used
    @Entity
    public class MyEntityClass implements Serializable {
        public static final String CUSTOMIZED_TIMESTAMP_TYPE = "com.db.types.CustomZonedTimestampType";
    
        public Date getUpdateDate() {
            return updateDate;
        }
    
        public void setUpdateDate(Date updateDate) {
            this.updateDate = updateDate;
        }
    
        @Type(type = CUSTOMIZED_TIMESTAMP_TYPE)
        private Date updateDate;
    }
    

As per the above code, MyEntityClass has a field named updateDate for which we want to send the date and time in the correct timezone.
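The timezone problem this type solves can be seen with plain JDK classes: the same instant renders differently depending on the zone a formatter (or a JDBC driver given a Calendar) uses. Below is a minimal sketch; the class name ZoneDemo is illustrative and not part of the code above.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ZoneDemo {
    // Render the same instant in a given zone, mimicking what the JDBC driver
    // does when setTimestamp/getTimestamp is handed a Calendar for that zone.
    public static String render(Date instant, String zoneId) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        fmt.setTimeZone(TimeZone.getTimeZone(zoneId));
        return fmt.format(instant);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L); // 1970-01-01T00:00:00Z
        System.out.println(render(epoch, "UTC"));                 // 1970-01-01 00:00
        System.out.println(render(epoch, "America/Los_Angeles")); // 1969-12-31 16:00
    }
}
```

The eight-hour shift in the second line is exactly the kind of skew that appears in the DB when the JVM default zone and the DB column's implied zone disagree.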

ADVANTAGES

  • Reliability: queries return the expected results from the DB
  • No other application running on the same system is impacted
  • Most importantly, the above steps enable the code to run on a system set to any timezone

 

 

Health Checks: Detection, Reporting, and Configuration of Server Instance/Process Health Status


In this article, I will talk about running-instance health: what can represent health, how we can detect it, and how we can use this health information to make the system resilient.

Health, basically, defines how well an instance is responding. Health can be:

  • UP
  • DOWN

REAL LIFE PROBLEM
Imagine you reach a bank and find it closed. Or imagine you are standing in a bank counter queue, waiting to be served. By the time your turn arrives, the person sitting at the counter goes away; maybe that person is not feeling well.

How would you feel in such a situation? Irritated? Frustrated?
What if you had been told upfront about this situation? Your time would not have been wasted. You would not have felt bad.

But what if someone else took over that counter and started serving you?

Now, imagine a pool of servers hosting a site that allows you to upload videos, say http://www.youtube.com. You are trying to upload a small video of yours, and every time you try, you get an error after some time and the video cannot be uploaded.

Basically, software applications like http://www.youtube.com run on machines, be they physical or virtual, in order to get the desired results. Executing these applications requires the machine's local resources, like memory, CPU, network, and disk, or other external dependencies to get things done.
These resources are limited, and executing multiple tasks concurrently brings a risk of contention and exhaustion.
It may happen that enough resources are not available for execution, and thus task execution will eventually fail.

In order to make the system resilient, one thing that can be done is to proactively determine the health status and report it (to a load balancer, to service discoverers, etc.) whenever asked, to prevent or deal with failures.

Reporting a health status with proper HTTP status codes, like 200 for UP and 500 for DOWN, can be quite useful.

WHAT CAN DEFINE INSTANCE/PROCESS HEALTH?
Below is a list of some common metrics that can be useful in detecting the health of an instance:

  • Pending Requests
    • Container Level
    • Message Level
  • Latency Overhead – Defined as the TP99 latency added by this application/layer
    • TP99 or TP95 or TP75 as per your Service SLAs
  • Resources
    • % Memory Utilization – Leading towards OOM
    • % CPU Utilization
      • Host Level
      • Process Level
    • Number of Threads
  • Any Business KPI
  • External dependency failures, optionally

Identifying the criteria above is important, and so is choosing correct threshold (saturation) values.
Values that are too low or too high can make the system unreliable.

WHY IS IT IMPORTANT?

A system is usually expected to be highly available and reliable. High availability can be achieved through redundancy, wherein multiple server instances run in parallel, processing requests and thus meeting demand.

What if one or more instances are running out of resources and thus not able to meet the demand?

Detecting such a state at an appropriate time and taking action can help in achieving high availability and reliability of the system.

It helps in making the system resilient against failures.

ACTIONS ON DETECTING UNHEALTHY

  • REPLENISH through REBOOT: If your server pool capacity is limited and cannot be increased, the unhealthy machine has to be restarted/rebooted in order to bring it back to a healthy state.
  • REPLACE: If you have ample server capacity or use a cloud computing platform (AWS, Azure, Google Cloud, etc.), rather than rebooting the machine, you can start a new machine, then kill the old unhealthy one and remove it from processing requests.

Once an instance is detected unhealthy, it shall be replenished or replaced:
either rebooted back to a healthy state, or replaced with a new server that is put behind the load balancer while the old one is removed from it.

OTHER CONSIDERATIONS

  • Do enable Connection Draining
  • Do configure Connection Draining timeout
  • Enable HealthCheck Response Caching
  • Scale before Declaring UnHealthy
  • Prefer Recent Trend before Declaring UnHealthy – configure unHealthy, healthy Thresholds

These settings prevent in-flight requests from being aborted prematurely.
Without these settings, data can be left in an inconsistent state.

  • Report Health with Proper Http Status Codes
    • 200 for UP
    • 500 for DOWN
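As a sketch of that reporting convention, here is a minimal health endpoint built only on the JDK's com.sun.net.httpserver; the class name HealthEndpoint, the UP flag, and the probe helper are illustrative, not part of any framework discussed here.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicBoolean;

public class HealthEndpoint {
    // Flipped by whatever component evaluates the instance's health
    public static final AtomicBoolean UP = new AtomicBoolean(true);

    // Start an HTTP server exposing /health; pass port 0 for an ephemeral port
    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/health", exchange -> {
                boolean up = UP.get();
                byte[] body = (up ? "{\"status\":\"UP\"}" : "{\"status\":\"DOWN\"}")
                        .getBytes(StandardCharsets.UTF_8);
                // 200 for UP, 500 for DOWN, as discussed above
                exchange.sendResponseHeaders(up ? 200 : 500, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            return server;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // What a load balancer effectively does: poll and key off the status code
    public static int probe(String url) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            return c.getResponseCode();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        HttpServer server = start(0);
        int port = server.getAddress().getPort();
        System.out.println("UP -> HTTP " + probe("http://127.0.0.1:" + port + "/health"));
        server.stop(0);
    }
}
```

A load balancer polling /health never parses the body; the 200-vs-500 status code alone decides whether the instance keeps receiving traffic.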

CODE IMPLEMENTATION

Basically, what we need is to peek into the current metrics and evaluate the health as UP or DOWN.

So we need a HealthEvaluator, a list of HealthCriteria, some operators, and a health definition.

public interface IHealthEvaluator {
    /**
     * Return an indication of health.
     * @return the health after consulting different metrics
     */
    Health health();
}
public final class CompositeMetricBasedHealthEvaluator implements IHealthEvaluator {
    /**
     * Instantiates an object of CompositeMetricBasedHealthEvaluator
     * @param healthCriteriaList List containing Metrics to be used for Health Evaluation
     * @param metricReadersList List containing Metric Readers
     */
    public CompositeMetricBasedHealthEvaluator(List<HealthCriteria<Number>> healthCriteriaList,
                                               List<MetricReader> metricReadersList) {
        this(healthCriteriaList, metricReadersList, null);
    }

    /**
     * Instantiates an object of CompositeMetricBasedHealthEvaluator
     * @param healthCriteriaList List containing Metrics to be used for Health Evaluation
     * @param metricReadersList List containing Metric Readers
     * @param metricsList List containing the Public Metrics
     */
    public CompositeMetricBasedHealthEvaluator(List<HealthCriteria<Number>> healthCriteriaList,
                                               List<MetricReader> metricReadersList,
                                               List<PublicMetrics> metricsList) {
        this.healthCriteriaList = CollectionUtils.isNotEmpty(healthCriteriaList)
                ? ListUtils.unmodifiableList(healthCriteriaList) : ListUtils.EMPTY_LIST;
        this.metricReaderList = metricReadersList;
        this.metricsList = metricsList;
    }

    /**
     * Return an indication of health.
     * @return the health after consulting different metrics
     */
    @Override
    public Health health() {
        Health.Builder curHealth = Health.up();
        Status status = Status.UP;
        for (HealthCriteria healthCriteria : this.healthCriteriaList) {
            String metricName = healthCriteria.getMetricName();
            if (StringUtils.isNotBlank(metricName)) {
                Metric metric = this.getFirstMatchingMetric(metricName);
                if (metric != null) {
                    Status metricStatus = evaluate(healthCriteria, metric);
                    curHealth.withDetail(metricName, String.format("Value:%s, Status:%s", metric.getValue(), metricStatus));
                    // Overall health is DOWN if any single criterion evaluates to DOWN
                    if (Status.DOWN.equals(metricStatus)) {
                        status = Status.DOWN;
                    }
                } else {
                    curHealth.withDetail(metricName, Status.UNKNOWN);
                }
            }
        }

        curHealth.status(status);

        return curHealth.build();
    }

    private Metric getFirstMatchingMetric(String name) {
        Object metricProvider = this.selectedMetricProvider.get(name);

        if (metricProvider instanceof MetricReader) {
            return find((MetricReader) metricProvider, name);
        } else if (metricProvider instanceof PublicMetrics) {
            return find((PublicMetrics) metricProvider, name);
        }

        // Preference to use MetricReaders
        if (CollectionUtils.isNotEmpty(this.metricReaderList)) {
            for (MetricReader metricReader : this.metricReaderList) {
                Metric<?> metric = find(metricReader, name);
                if (metric != null) {
                    this.selectedMetricProvider.put(name, metricReader);
                    return metric;
                }
            }
        }

        if (CollectionUtils.isNotEmpty(this.metricsList)) {
            for (PublicMetrics publicMetrics : this.metricsList) {
                Metric<?> metric = find(publicMetrics, name);
                if (metric != null) {
                    this.selectedMetricProvider.put(name, publicMetrics);
                    return metric;
                }
            }
        }

        return null;
    }

    private static Status evaluate(HealthCriteria healthCriteria, Metric metric) {
        int result = compare(metric.getValue(), healthCriteria.getThresholdOrSaturationLevel());
        ComparisonOperator op = healthCriteria.getOperator();

        if ((ComparisonOperator.EQUAL.equals(op) && result != 0) ||
                (ComparisonOperator.LESS_THAN.equals(op) && result >= 0) ||
                (ComparisonOperator.LESS_THAN_EQUAL.equals(op) && result > 0) ||
                (ComparisonOperator.GREATER_THAN.equals(op) && result <= 0) ||
                (ComparisonOperator.GREATER_THAN_EQUAL.equals(op) && result < 0)) {
            return Status.DOWN;
        }

        return Status.UP;
    }

    private static Metric<?> find(MetricReader reader, String name) {
        try {
            return reader.findOne(name);
        } catch (RuntimeException ex) {
            // Ignore the Runtime exceptions
            return null;
        }
    }

    private static Metric<?> find(PublicMetrics source, String name) {
        return (Metric<?>) CollectionUtils.find(source.metrics(),
                (met) -> StringUtils.equalsIgnoreCase(((Metric) met).getName(), name));
    }

    private static int compare(Number n1, Number n2) {
        if (n1 != null && n2 != null) {
            return Double.compare(n1.doubleValue(), n2.doubleValue());
        }

        if (n1 != null) {
            return 1;
        }

        if (n2 != null) {
            return -1; // n1 is absent; treat it as smaller, even when n2 is negative
        }
        return 0;
    }

    private final List<HealthCriteria<Number>> healthCriteriaList;
    private final List<PublicMetrics> metricsList;
    private final List<MetricReader> metricReaderList;
    private final Map<String, Object> selectedMetricProvider = new HashMap<>();
}

HealthCriteria defines three things: what has to be checked, its expected value (or range), and the operator. The value can be an integer, float, decimal, etc.

public class HealthCriteria<TInput extends Number> {
    /**
     * Gets the Operator
     * @return Operator to be used for health evaluation
     */
    public ComparisonOperator getOperator() {
        return operator;
    }

    /**
     * Sets the Operator
     * @param operator Operator to be used for health evaluation
     */
    public void setOperator(ComparisonOperator operator) {
        this.operator = operator;
    }

    /**
     * Gets the Threshold or Saturation value against which health evaluation to be done
     * @return Threshold or Saturation value
     */
    public TInput getThresholdOrSaturationLevel() {
        return thresholdOrSaturationLevel;
    }

    /**
     * Sets the Threshold or Saturation value against which health evaluation to be done
     * @param thresholdOrSaturationLevel Threshold or Saturation value
     */
    public void setThresholdOrSaturationLevel(TInput thresholdOrSaturationLevel) {
        this.thresholdOrSaturationLevel = thresholdOrSaturationLevel;
    }

    /**
     * Gets the name of the metric to be used for health evaluation
     * @return Metric name
     */
    public String getMetricName() {
        return metricName;
    }

    /**
     * Sets the name of the metric to be used for health evaluation
     * @param metricName Metric name
     */
    public void setMetricName(String metricName) {
        this.metricName = metricName;
    }

    private String metricName;
    private TInput thresholdOrSaturationLevel;
    private ComparisonOperator operator;
}

@Configuration
@ConfigurationProperties("healthIndicator")
public class HealthCriteriaList {
    public List<HealthCriteria<Number>> getCriterias() {
        return criterias;
    }

    public void setCriterias(List<HealthCriteria<Number>> criterias) {
        this.criterias = criterias;
    }

    private List<HealthCriteria<Number>> criterias;
}

Some basic Operators that can be supported are:

public enum ComparisonOperator {
    EQUAL,
    LESS_THAN,
    LESS_THAN_EQUAL,
    GREATER_THAN ,
    GREATER_THAN_EQUAL;
}
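To make the criterion semantics concrete, here is a standalone sketch of the evaluation rule: the operator states the condition under which health is UP, mirroring the evaluate() method above. The class name CriterionDemo and the isUp helper are illustrative.

```java
public class CriterionDemo {
    public enum ComparisonOperator { EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL }

    // Health is UP when "value <op> threshold" holds, DOWN otherwise.
    public static boolean isUp(double value, double threshold, ComparisonOperator op) {
        int r = Double.compare(value, threshold);
        switch (op) {
            case EQUAL:              return r == 0;
            case LESS_THAN:          return r < 0;
            case LESS_THAN_EQUAL:    return r <= 0;
            case GREATER_THAN:       return r > 0;
            case GREATER_THAN_EQUAL: return r >= 0;
            default:                 return false;
        }
    }

    public static void main(String[] args) {
        // A "threads" criterion of LESS_THAN 100:
        System.out.println(isUp(80, 100, ComparisonOperator.LESS_THAN));  // true  -> UP
        System.out.println(isUp(120, 100, ComparisonOperator.LESS_THAN)); // false -> DOWN
    }
}
```

So a criterion of {metricName: threads, threshold: 100, operator: LESS_THAN} reports UP at 80 threads and DOWN at 120.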

Using the above code, you can evaluate health based on metrics and plug it into any application, be it SPRINGBOOT, DROPWIZARD, CXF, etc.

A SPRINGBOOT adapter like the one below can be plugged in to start evaluating health based on metrics.

public final class MetricBasedSpringBootAdapter implements HealthIndicator {
    /**
     * Instantiates an object of MetricBasedSpringBootAdapter
     * @param healthEvaluator Reference to an instance of IHealthEvaluator impl
     */
    public MetricBasedSpringBootAdapter(IHealthEvaluator healthEvaluator) {
        Assert.notNull(healthEvaluator, "Underlying HealthEvaluator");
        this.underlyingHealthEvaluator = healthEvaluator;
    }

    /**
     * Return an indication of health.
     * @return the health for Server Instance after consulting different metrics
     */
    @Override
    public Health health() {
        return this.underlyingHealthEvaluator.health();
    }

    private final IHealthEvaluator underlyingHealthEvaluator;
}

HOW IT WORKS IN SPRINGBOOT?

Spring Boot includes a number of built-in endpoints.
One of the endpoints is the health endpoint which provides basic application health information.
By default, the health endpoint is mapped to /health

On invoking this endpoint, health information is collected from all HealthIndicator beans defined in your
ApplicationContext, and an aggregated health status is returned based on the statuses those indicators report.

Spring Boot includes a number of auto-configured HealthIndicators and also allows us to write our own.

Since we keep track of certain metrics in our applications, we wanted the ability to evaluate health based on
certain metrics' values. For example, if the number of threads exceeds 'n', health shall be reported as DOWN.

For this purpose, CompositeMetricBasedHealthEvaluator is implemented.
It relies on either MetricReaders or PublicMetrics to get the metrics' current values and evaluates health
accordingly.

It reports the individual health of each configured health-indicator criterion and reports overall health as DOWN
if any of them is DOWN.

For an unavailable metric, health cannot be determined, so it is reported as UNKNOWN for that specific metric.

STEPS TO ENABLE IN SPRINGBOOT

* Enable the health endpoint if not enabled already
* Optionally configure a custom endpoint name and other parameters, like caching of results
* Configure MetricReader(s) and/or PublicMetric(s)
* Configure the HealthIndicator metric criteria
* Instantiate CompositeMetricBasedHealthEvaluator, injecting the MetricReaders and/or PublicMetrics and the criteria configured above
* Instantiate MetricBasedSpringBootAdapter, injecting the CompositeMetricBasedHealthEvaluator, and register it in the Spring application context
* Disable/enable auto-configured HealthIndicators as desired

That's all that needs to be done to enable health evaluation using metrics.

HOW TO ENABLE HEALTH ENDPOINT?

One of the ways is to enable it through Application Configuration YAML file.
In your application.yml file, put the following configuration:

endpoints:
  health:
    enabled: true
    time-to-live: 1000

With the above configuration, the health endpoint is enabled and results are cached for 1000 ms.
The default time-to-live is also 1000 ms.

HOW TO CONFIGURE HEALTH INDICATOR METRIC CRITERIA?

1) **VIA APPLICATION CONFIGURATION YAML file**

One of the ways is to configure it in Application Configuration YAML file itself.
In your application.yml file, put the following configuration:

healthIndicator:
  criterias:
    - metricName: threads
      thresholdOrSaturationLevel: 100
      operator: LESS_THAN
    - metricName: anotherMetricNameGoesHere
      thresholdOrSaturationLevel: 100.23
      operator: ANY_COMPARISON_OPERATOR(EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL)

With the above configuration, two criteria are defined and the **HealthCriteriaList** object gets instantiated via
the @ConfigurationProperties annotation.

Here, the thread criterion specifies that for health to be **UP**, the number of threads must be < 100.
If the number of threads is >= 100, health will be reported as **DOWN**.

Likewise, more criteria can be defined.

Note that
* **metricName** can contain ‘.’ character as well.
* **thresholdOrSaturationLevel** can have any Valid Number, be it Integer or Decimal Number
* **operator** can be any valid value from ComparisonOperator enum.

2) **The same configuration can be done in code**

List<HealthCriteria<Number>> criterias = new ArrayList<>();

HealthCriteria<Number> criteria = new HealthCriteria<>();
final String expMetricName = "threads";
criteria.setMetricName(expMetricName);
criteria.setThresholdOrSaturationLevel(100);
criteria.setOperator(ComparisonOperator.LESS_THAN);

criterias.add(criteria);

HOW TO PLUGIN MetricBasedSpringBootAdapter?

MetricBasedSpringBootAdapter implements the HealthIndicator interface. Thus, simply injecting it into the
Spring application context plugs this component in for health evaluation.

The configuration below instantiates the adapter with MetricReaders only. Both parameters, healthCriteriaList and
metricReaderList, are injected automatically through the Spring application context, thanks to auto-configuration.

@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        HealthCriteriaList healthCriteriaList,
        List<MetricReader> metricReaderList) {
    return new MetricBasedSpringBootAdapter(
            new CompositeMetricBasedHealthEvaluator(healthCriteriaList.getCriterias(), metricReaderList));
}

OR,

@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        List<HealthCriteria<Number>> healthCriteriaList,
        List<MetricReader> metricReaderList) {
    return new MetricBasedSpringBootAdapter(
            new CompositeMetricBasedHealthEvaluator(healthCriteriaList, metricReaderList));
}

OR,

@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        HealthCriteriaList healthCriteriaList,
        List<MetricReader> metricReaderList,
        List<PublicMetrics> publicMetricsList) {
    return new MetricBasedSpringBootAdapter(
            new CompositeMetricBasedHealthEvaluator(healthCriteriaList.getCriterias(), metricReaderList, publicMetricsList));
}

The last configuration is useful when no MetricReader is available for a metric but the metric is exposed through
the PublicMetrics interface.
With the above configurations, all parameters are injected automatically by Spring.

Things to Note
* The bean name minus the suffix "HealthIndicator" (here: metricBased) is what gets reported as the health indicator name.
* Auto-configuration of MetricReaders, PublicMetrics, or configuration properties may be disabled. If so, either
enable auto-configuration or instantiate the MetricReaders, PublicMetrics, etc. manually.
* The PublicMetrics interface can be expensive depending upon the number of metrics being maintained. Use it only if
a custom MetricReader cannot be written or the metrics are small in number.

Data Contracts, XSDs and Redundant List Wrappers – XEW Plugin to rescue


In Service Oriented Architecture (SOA) or MicroServices Architecture, data is exchanged between different components over the network.

Keeping INTEROPERABILITY in mind, data contracts are created and shared.

Contracts, whether in the form of WSDLs or XSDs, are mutually agreed between the components to exchange structured data among them.

As part of these contracts, you may need to send a collection of similar data, and for this purpose you may have defined different complexTypes in your XSD.

This article talks about the problem associated with defining list complex types, how we can overcome this problem using the XEW plugin, and the benefits.

Consider you want to exchange a list of AirSegments under an Itinerary, like:

<OriginDestinationBooked>
    <AirSegmentBookedList>
        <AirSegmentBooked>
            <CarrierCode>AC</CarrierCode>
            <FlightNumber>12</FlightNumber>
        </AirSegmentBooked>
        <AirSegmentBooked>
            <CarrierCode>AC</CarrierCode>
            <FlightNumber>13</FlightNumber>
        </AirSegmentBooked>
        <AirSegmentBooked>
            <CarrierCode>AC</CarrierCode>
            <FlightNumber>189</FlightNumber>
        </AirSegmentBooked>
    </AirSegmentBookedList>
</OriginDestinationBooked>

To accomplish this, you will define Something like below:

<xs:complexType name="OriginDestinationBookedType">
    <xs:sequence>
        <xs:element name="AirSegmentBookedList" type="SegmentBookedListType"/>
    </xs:sequence>
</xs:complexType>

<xs:complexType name="SegmentBookedListType">
    <xs:sequence>
        <xs:element maxOccurs="unbounded" name="AirSegmentBooked" type="SegmentBookedType"/>
    </xs:sequence>
</xs:complexType>

<xs:complexType name="SegmentBookedType">
    <xs:sequence>
        <xs:element name="CarrierCode" type="CarrierCodeType"/>
        <xs:element name="FlightNumber" type="FlightNumberType"/>
    </xs:sequence>
</xs:complexType>
This looks good. Good, that is, until we generate the proxy classes out of these contracts. Try generating the classes from these XSDs using plugins such as jaxb-maven or CXF.

You will notice that three proxy classes get generated.

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "OriginDestinationBookedType", propOrder = {
    "segmentBookedList"
})
public class OriginDestinationBookedType {
    @XmlElement(name = "SegmentBookedList", required = true)
    protected SegmentBookedListType segmentBookedList;

    public SegmentBookedListType getSegmentBookedList() {
        return segmentBookedList;
    }

    public void setSegmentBookedList(SegmentBookedListType value) {
        this.segmentBookedList = value;
    }
}

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "SegmentBookedListType", propOrder = {
    "segmentBookeds"
})
public class SegmentBookedListType {
    @XmlElement(name = "SegmentBooked", required = true)
    protected List<SegmentBookedType> segmentBookeds;

    public List<SegmentBookedType> getSegmentBookeds() {
        if (segmentBookeds == null) {
            segmentBookeds = new ArrayList<SegmentBookedType>();
        }
        return this.segmentBookeds;
    }
}

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "SegmentBookedType", propOrder = {
    "carrierCode",
    "flightNumber"
})
public class SegmentBookedType {
    @XmlElement(name = "CarrierCode", required = true)
    protected String carrierCode;
    @XmlElement(name = "FlightNumber", required = true)
    protected String flightNumber;
}

With the above classes, if you want to access a segment within an OD, you have to write:

OriginDestinationBookedType od; // Initialized properly and you have a non-null reference
od.getSegmentBookedList().getSegmentBookeds().get(segIndex);

The intermediate getSegmentBookedList() call is redundant and not needed for sure. Instead, we want to have:

od.getSegmentBookeds().get(segIndex);

How can we directly get a list of segments under an OD?

Solution
Integrate the XEW plugin into your repository and have it executed during the code-generation phase.
Simply configure:

 

<plugin>
    <groupId>org.jvnet.jaxb2.maven2</groupId>
    <artifactId>maven-jaxb2-plugin</artifactId>
    <version>0.13.1</version>
    <dependencies>
        <dependency>
            <groupId>org.jvnet.jaxb2_commons</groupId>
            <artifactId>jaxb2-basics</artifactId>
            <version>0.6.3</version>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <id>air-ticket-schema</id>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <extension>true</extension>
                <args>
                    <arg>-Xannotate</arg>
                    <arg>-Xxew</arg>
                    <arg>-Xxew:control ${basedir}/src/main/resources/xsds/xewInclusionExclusion.txt</arg>
                </args>
                <plugins>
                    <plugin>
                        <groupId>org.jvnet.jaxb2_commons</groupId>
                        <artifactId>jaxb2-basics-annotate</artifactId>
                        <version>1.0.2</version>
                    </plugin>
                    <plugin>
                        <groupId>com.github.jaxb-xew-plugin</groupId>
                        <artifactId>jaxb-xew-plugin</artifactId>
                        <version>1.9</version>
                    </plugin>
                    <plugin>
                        <groupId>com.sun.xml.bind</groupId>
                        <artifactId>jaxb-xjc</artifactId>
                        <version>2.2.11</version>
                    </plugin>
                </plugins>
                <!-- JVM option used during generation: -Djavax.xml.accessExternalSchema=all -->
                <schemaDirectory>${basedir}/src/main/resources/xsds</schemaDirectory>
                <schemaIncludes>
                    <include>yourXSDsHere.xsd</include>
                </schemaIncludes>
                <generateDirectory>${basedir}/target/generated-sources</generateDirectory>
                <bindingDirectory>${basedir}/src/main/resources/xsds</bindingDirectory>
                <bindingIncludes>
                    <include>bindings.xjb</include>
                </bindingIncludes>
                <removeOldOutput>false</removeOldOutput>
                <forceRegenerate>false</forceRegenerate>
                <verbose>true</verbose>
            </configuration>
        </execution>
    </executions>
</plugin>

 

With the above configuration, only two classes will be generated.

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "OriginDestinationBookedType", propOrder = {
    "segmentBookedList"
})
public class OriginDestinationBookedType {
    @XmlElement(name = "SegmentBookedList", required = true)
    protected List<SegmentBookedType> segmentBookedList;

    public List<SegmentBookedType> getSegmentBookedList() {
        return segmentBookedList;
    }

    public void setSegmentBookedList(List<SegmentBookedType> value) {
        this.segmentBookedList = value;
    }
}

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "SegmentBookedType", propOrder = {
    "carrierCode",
    "flightNumber"
})
public class SegmentBookedType {
    @XmlElement(name = "CarrierCode", required = true)
    protected String carrierCode;
    @XmlElement(name = "FlightNumber", required = true)
    protected String flightNumber;
}

And you are all set. No more cursing at XSDs 🙂

ADVANTAGES

  • No more list wrapper classes; no more clumsy extra code
  • No redundant null checks
  • More readability
  • Fewer machine instructions to execute
  • Smaller memory footprint (fewer virtual function table entries)
  • More maintainability