Wrath of Cache Miss: Protect against Cache Stampede (Part – 1)


All of us are pretty familiar with a STAMPEDE in the real world. It is FATAL and leads to losses, including LOSS of lives.

A stampede is a situation in which a group of living beings suddenly rushes in a single direction and starts crushing whatever comes in the way, including the very beings that created the situation.

How does it sound to you?

Scary? Dangerous? Isn’t it?

Imagine the same situation occurring within your software systems and bringing your system(s) down due to the mad rush.

Very much like detecting Single Points of Failure within a software architecture, it is important to understand these issues: identify the problems that can arise from a miss in the Cache, which can end up choking your storage system.

What is Cache Stampede?

It is the software-system equivalent of a real-world stampede. Before defining it, let's first understand some key concepts.

What is Caching? What is a Cache Miss? What is the purpose of having Caching within the system?

Imagine the drinking water stored in the kitchen. Every time you want to drink water, you would have to go to the kitchen.
For a lazy person like me, this is crazy.
I would prefer to store water in a bottle which I can keep on my desk and drink from whenever needed. And when the bottle gets empty, I would refill it from the kitchen store.

So, here, in some way, the bottle is acting as a Cache and the kitchen store as the database :)

And the process of filling the water bottle from the kitchen store is like filling the Cache with data for later use.

So, what is Caching?

A strategy and solution for quickly accessing frequently used data, typically data that does not change very often.

In many software applications, you would have noticed the use of data-structure classes like HASH, MAP or DICTIONARY.

These are all constructs to keep a local copy of data within process memory.

And why do we need a Cache?

To get quick access to that data. This not only reduces processing time, it also prevents unnecessary load on your storage system(s), such as a database, and prevents the database from becoming a bottleneck. Caching can thus help improve the overall throughput of your application or system.

What’s the catch?

Remember, I talked about the process of filling the water bottle from the kitchen store?

That's how the water bottle gets water. There is no magic that keeps refilling it.
You have to determine that there is no water in it and that it needs to be filled.

Likewise, a cache will not have data on its own. There has to be some way through which the required data gets into it [there are many strategies, which I will not cover here]. Ultimately, your application will first try to read the data from the cache, find it missing, get the data from the database and put it into the cache.
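To make this concrete, here is a minimal cache-aside read sketch in Java. The Database interface and the in-process map standing in for the cache are hypothetical, used only to illustrate the read path described above:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical stand-in for your real database client.
interface Database {
    String loadProduct(String id);
}

public class ProductReader {
    // An in-process map stands in for the real cache here.
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    private final Database db;

    public ProductReader(Database db) {
        this.db = db;
    }

    public String getProduct(String id) {
        // 1. Try the cache first (the water bottle on the desk).
        String cached = cache.get(id);
        if (cached != null) {
            return cached;
        }
        // 2. Cache miss: fetch from the database (the kitchen store)...
        String fresh = db.loadProduct(id);
        // 3. ...and put it into the cache for subsequent reads.
        cache.put(id, fresh);
        return fresh;
    }
}

Notice that nothing here stops many concurrent callers from all missing the cache and hitting loadProduct at once; that gap is exactly what the rest of this post is about.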

Now, imagine other family members, friends or colleagues in your company who would also like to keep a water bottle near them for easy and quick access to quench their thirst.

They will all go to the kitchen store and fill their water bottles.
What’s the big deal there?

Right, if all of you go at different times, it is not problematic at all. No QUEUEING in the kitchen store.

But what if all of you go to the kitchen store at roughly the same time?

You will end up in a queue, waiting for a time that depends on the size of the bottle the person in front of you is carrying, the outflow/speed of the kitchen store, and so on.

Likewise, imagine that in your application many user requests come in at the same time and all of them find the data missing from the cache. All those requests will rush to your storage system, such as a database, to get the data at the same time. Your database gets bombarded by a large number of identical requests, which can result in latency/slowness or even unavailability of your database and your application.

This is a Cache Stampede, wherein application requests, resembling bulls :) in the real world, rush towards the database.

And this requires careful consideration and protection. Otherwise, the purpose of using a cache gets defeated.

Situations: When can this happen?

Your system may get away without protection when there are not many simultaneous, concurrent requests or transactions looking for the same data.

Or when the data to be fetched from storage is pretty small in size, with a low memory footprint and small processing needs.

But, what if those are not applicable to your system?

What if the static data your application needs is quite big, running into a few hundred MBs?

What if your application supports a much higher rate of requests per second than the external system from which it sources the data?

Imagine you have built an e-commerce site like Amazon or Flipkart or Expedia, where your system needs product information and has millions of active users at any time.
Imagine you are preparing for a big event, say Black Friday or Diwali or some product launch, and millions of concurrent active users are just waiting for the sale to open so they can book.

All of them start shopping for the product, requests come to the system, and the system sees that the product information is not in the cache for some reason, such as TTL (Time To Live) expiry or eviction, and has to be loaded from the data store.

All requests now have to be served from the database. This can result in:
a) The data store getting overwhelmed by that many requests, which may result in failures or throttling.
RISK: Database performance degradation or unavailability

b) With some resiliency built in, your system may have the capability to retry with some wait period baked in.
RISK: Latency. Requests take longer to complete. User experience degrades.

c) Let's assume the database somehow scales and all those million requests succeed in loading the data from the database into memory. However, the data size is big, say 1 MB.

With millions of concurrent active users, let's assume we have 10^6 requests being served across all servers.

This results in a memory requirement of at least 1 MB * 10^6
= 10^3 * 10^3 MB
= 10^3 GB (roughly)

This means that even when your system cluster is provisioned with 100 servers and load balancing is in place, each server would end up consuming at least 10 GB (10^3 GB / 100) at some point in time just to populate the cache.

This happened for just 1 product that is just 1 MB big.
Imagine what happens when the number of products increases, or when the size of each product increases.

Where did we build Cache Stampede protection?

One of the products my team built is Azure Migrate. It provides customers with SKU recommendations. It also shows what the total cost would be as per those recommendations.

New SKUs may get added on any day, existing SKUs may get deprecated on any day.
Likewise, Prices for those SKUs may change on any day.

SKU and price information is maintained by other teams (following separation of concerns :)), and our system integrates with their systems.

There were many thousands of SKUs available and multiple millions of price data points. Imagine the different offers running and the different regions across the world where things differ. All of this contributes to a large data size of hundreds of MBs (~250 MB).

Our product, and thus our system, has SLOs/SLAs too, like any other system, and it supports assessing many thousands of resources such as VMs, databases, storage disks etc. at any time. Imagine an enterprise running 30-40K machines with 60-90K disks attached to them, plus thousands of other resources, all of which need to be assessed so that recommendations can be generated.

Now imagine such tens or hundreds or thousands of those enterprises.

As you would have guessed, you would think of caching.
Cache both the SKU data and the price data, and refresh once a day to keep the data current.

Now imagine having a TTL in place, the whole of the SKU and price data expiring every day at some fixed time, and 250 MB of data needing to be reloaded into the cache.

250 MB per request * (30K + 90K) resources in the request * 10^3 requests, with all these resources being assessed concurrently in batches without affecting system SLOs/SLAs.

This can easily cause LOW or ZERO Memory availability in the whole cluster and bring the system down.

You may ask: I get the problem. How can I prevent it from occurring?
What can I do to make the system resilient so that it can still perform and do what it is supposed to do?

Solution: Remember the common and famous saying: Prevention is better than Cure.

Yes, protect against Cache Stampede. Prevent the stampede from happening.

There are again different strategies to protect against Cache Stampede. Each strategy has some pros and cons.

Strategy 1: Prevent keys from expiring at the same time. Add jitter to cache key TTLs. A very simple strategy: non-blocking, no stale data. However, Hot Keys (a single key being read a lot of the time; think of static data) would still be problematic.
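A minimal sketch of Strategy 1, assuming a cache API that accepts a per-key TTL (the cache.put call shown in the usage comment is hypothetical):

import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

public final class JitteredTtl {
    private JitteredTtl() {
    }

    // Extends the base TTL by a random jitter of up to maxJitter so that
    // keys written together do not all expire together.
    public static Duration withJitter(Duration baseTtl, Duration maxJitter) {
        long jitterMillis = ThreadLocalRandom.current().nextLong(maxJitter.toMillis() + 1);
        return baseTtl.plusMillis(jitterMillis);
    }
}

// Usage (hypothetical cache API):
// cache.put(key, value, JitteredTtl.withJitter(Duration.ofHours(24), Duration.ofMinutes(30)));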

Strategy 2: Remember locks? Define the data-access procedure from storage as an area/block of mutual exclusion and prevent simultaneous access.

Sounds simple. However, locks inherently limit throughput, introduce latencies and are best avoided wherever possible.
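For Strategy 2, a minimal single-process sketch; in Java, ConcurrentHashMap.computeIfAbsent already gives per-key mutual exclusion. Note this protects only one process; with a cache shared across many servers, a distributed lock would be needed instead:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

public class LockingLoader<K, V> {
    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();

    // Per key, only one caller runs the expensive loader; concurrent
    // callers for the same key block until the value is in the map.
    public V get(K key, Supplier<V> loader) {
        return cache.computeIfAbsent(key, k -> loader.get());
    }
}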

Strategy 3: Refresh data asynchronously in another thread. Basically, the application keeps track of what data may expire and when, and refreshes it asynchronously in other thread(s). This may result in surfacing stale data and requires proper tracking and parallel processing, and thus multi-threading.
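A minimal sketch of Strategy 3, refreshing on a fixed schedule ahead of expiry so request threads never load from the database themselves (readers may briefly see stale data between refreshes):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class ScheduledRefresher<V> {
    private final AtomicReference<V> cached = new AtomicReference<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Reloads the value periodically in a background thread.
    public void start(Supplier<V> loader, long refreshSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> cached.set(loader.get()), 0, refreshSeconds, TimeUnit.SECONDS);
    }

    public V get() {
        return cached.get();
    }
}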

Strategy 4: Another could be a hybrid of Strategies 2 and 3. Data is indeed refreshed asynchronously, but the refresh is triggered when the data is accessed, and thus during actual request processing. Multiple requests can detect the need to refresh the data, so there must be a way to ensure the data is refreshed by only one request. The other requests should not trigger another refresh; they can either wait until the data gets refreshed or, as one implementation, keep getting the last fetched data until the refresh completes.
This can be complex, requires a thorough understanding and detection of deadlocks, and can use atomic lock strategies.
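A minimal sketch of Strategy 4 for a single value: one request wins an atomic compare-and-set and refreshes in the background, while everyone else keeps getting the last fetched value. (The cold-start path here is left unguarded for brevity; a production version would protect it too.)

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

public class StaleWhileRevalidate<V> {
    private volatile V cached;              // last fetched value (possibly stale)
    private volatile long expiresAtMillis;  // soft expiry
    private final AtomicBoolean refreshing = new AtomicBoolean(false);

    public V get(Supplier<V> loader, long ttlMillis) {
        V value = cached;
        if (value == null) {
            // Cold start: load synchronously; there is nothing stale to serve yet.
            value = loader.get();
            cached = value;
            expiresAtMillis = System.currentTimeMillis() + ttlMillis;
            return value;
        }
        if (System.currentTimeMillis() > expiresAtMillis
                && refreshing.compareAndSet(false, true)) {
            // Exactly one request wins the CAS and refreshes asynchronously.
            CompletableFuture.runAsync(() -> {
                try {
                    cached = loader.get();
                    expiresAtMillis = System.currentTimeMillis() + ttlMillis;
                } finally {
                    refreshing.set(false);
                }
            });
        }
        // Everyone else (and the winner, immediately) serves the cached value.
        return value;
    }
}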

Based on your application's needs, such as whether it is fine to surface stale data for the time being, whether you have hot-key patterns, etc., you can choose a strategy.

In the next blog, I can cover the implementation of the above strategies. Please leave a comment with the strategy whose implementation you would like me to cover.


Message Queuing Frameworks – Comparison Grid


We often come across requirements which are well suited to integrating a Messaging Framework into a software system.
There are many messaging frameworks available in the market – some are open source, some are paid/licensed, and some provide great support or have good community support.

In order to make an apt choice, we look out and explore different messaging frameworks based on our requirements.

This post compares a few popular messaging frameworks and aims to equip you with enough information to choose the best framework for your requirements.

COMPARISON GRID

| | RabbitMQ | Apache Kafka | AWS SQS |
| --- | --- | --- | --- |
| HA | ☑ Requires some extra work and may require 3rd-party plugins like Shovel and Federation | ☑ Out of the Box (OOB) | ☑ OOB |
| Scalable | | | |
| Guaranteed Delivery | ☑ Supports consumer acknowledgments | ☑ Supports consumer acknowledgments | ☑ Supports consumer acknowledgments |
| Durable | ☑ Through disk nodes and queues, with extra configuration | ☑ OOB | ☑ Message retention of up to 14 days max, 4 days by default |
| Exactly-Once Delivery | ☑ Annotates a message with redelivered when the message was delivered earlier but the consumer ack failed. Requires idempotent consumer behavior | ☑ Dependent on consumer behavior: the consumer is responsible for tracking offsets (messages read so far) and storing them. Kafka now supports storing offsets within Kafka itself, OOB through HIGH LEVEL CONSUMERS; still requires idempotent consumer behavior | ☑ MessageDeduplicationId and MessageGroupId attributes are used. Requires idempotent consumer behavior. FIFO queues support exactly-once, while Standard queues support at-least-once |
| Ease of Deployment | ☑ For a distributed topology, requires more effort and 3rd-party plugins | ☑ Requires ZooKeeper | ☑ Managed by AWS |
| Authentication Support | ☑ OOB | ☑ OOB | ☑ OOB |
| Authorization (ACL) Support | ☑ OOB | ☑ OOB | ☑ OOB |
| TLS Support | ☑ OOB | ☑ OOB | ☑ OOB |
| Non-Blocking Producers | | ☑ Supports both synchronous and async | |
| Performant | ★★ Medium to High | ★★★★ Very High | ★★★★ Very High. FIFO: 300 TPS; Standard queues: unlimited |
| Open Source | ☑ | ☑ | |
| Load Balancing Across Consumers | | ☑ Can be done through Consumer Groups | ☑ Multiple consumers can read from the same queue in an atomic way |
| Delay Queues | NOT OOB | NOT OOB | ☑ OOB |
| Visibility Timeout Queues | NOT OOB | NOT OOB | ☑ OOB |
| Message Dedup | | | ☑ |
| Message Size Limits | | | Up to 256 KB; the AWS SDK supports storing larger messages in S3 etc., though |
| No. of Messages in a Queue | ☑ No limits | ☑ No limits | ☑ No limits, but with in-flight caps: Standard queues 120,000 in-flight messages, FIFO 20,000. (Messages are in-flight after they have been received from the queue by a consuming component but have not yet been deleted from the queue) |
| Message Content Limits | ☑ No limits | ☑ No limits | A message can include only XML, JSON and unformatted text. Only these Unicode characters are allowed: #x9 \| #xA \| #xD \| #x20 to #xD7FF \| #xE000 to #xFFFD \| #x10000 to #x10FFFF; any other characters are rejected |
| Disaster Recovery | Not OOB | Not OOB but simple: replicas can be deployed across regions | Not OOB; requires different strategies to achieve it |


Health Checks: Detection, Reporting and Configuration of Server Instance/Process Health Status


In this article, I will talk about running-instance health: what can represent the health, how we can detect it, and how we can use this health information to make the system resilient.

Health, basically, defines how well an instance is responding. Health can be:

  • UP
  • DOWN

REAL LIFE PROBLEM
Imagine you reach a bank and find it closed. Or imagine you are standing in a bank counter queue, waiting to be served. By the time your turn arrives, the person sitting at the counter goes away. Maybe that person is not feeling well.

How would you feel in such a situation? Irritated? Frustrated?
What if you had been told upfront about this situation? Your time would not have been wasted. You would not have felt bad.

But what if someone else takes over the job at that counter and starts serving you?

Now, imagine a pool of servers hosting a site which allows you to upload a video, say http://www.Youtube.com. You are trying to upload a small video of yours, and every time you try, you get some error after a while and the video cannot be uploaded.

Basically, software applications like http://www.youtube.com run on machines – physical or virtual – in order to get the desired results. Executing these applications requires the machine's local resources like memory, CPU, network, disk etc., or other external dependencies, to get things done.
These resources are limited, and executing multiple tasks concurrently poses a risk of contention and exhaustion.
It may happen that enough resources are not available for execution, and thus the task execution will eventually fail.

In order to make the system resilient, one of the things that can be done is to proactively determine the Health Status and report it – to a LoadBalancer or to Service Discoverers etc. whenever asked – to prevent or deal with the failures.

Reporting a health status with proper HTTP status codes, like 200 for UP and 500 for DOWN, can be quite useful.

WHAT CAN DEFINE INSTANCE/PROCESS HEALTH?
Below is a list of some common metrics that can be useful in detecting the health of an instance:

  • Pending Requests
    • Container Level
    • Message Level
  • Latency Overhead – Defined as the TP99 latency added by this application/layer
    • TP99 or TP95 or TP75 as per your Service SLAs
  • Resources
    • % Memory Utilization – Leading towards OOM
    • % CPU Utilization
      • Host Level
      • Process Level
    • Number of Threads
  • Any Business KPI
  • External dependencies' failures, optionally

Identifying the above criteria is important, and so is choosing the correct threshold or saturation values.
Values set too low or too high can result in system unreliability.

WHY IS IT IMPORTANT?

A system is usually expected to be highly available and reliable. High Availability can be achieved through redundancy, wherein multiple server instances run in parallel, processing the requests and thus meeting the demand.

What if one or more instances are running out of resources and thus not able to meet the demand?

Detecting such a state at an appropriate time and taking an action can help in achieving High Availability and Reliability of the System.

It helps in making the system resilient against failures.

ACTIONS ON DETECTING UNHEALTHY

  • REPLENISH thru REBOOT: If you have a limited server pool capacity and cannot increase it, the unhealthy machine has to be restarted/rebooted in order to get it back to a healthy state.
  • REPLACE: If you have unlimited server capacity or are using a Cloud Computing platform – AWS, Azure, Google Cloud etc. – rather than rebooting the machine, you have the option of starting a new machine and removing the old unhealthy one from processing requests.

Once an instance is detected as unhealthy, it shall be replenished or replaced.
Either the unhealthy instance is rebooted to bring it back to a healthy state, or it is replaced with a new server that is put behind the LoadBalancer while the old one is removed from it.

OTHER CONSIDERATIONS

  • Do enable Connection Draining
  • Do configure Connection Draining timeout
  • Enable HealthCheck Response Caching
  • Scale before Declaring UnHealthy
  • Prefer Recent Trend before Declaring UnHealthy – configure unHealthy, healthy Thresholds

These settings prevent in-flight requests from being aborted prematurely.
Without these settings, data can be left in an inconsistent state.

  • Report Health with Proper Http Status Codes
    • 200 for UP
    • 500 for DOWN

CODE IMPLEMENTATION

Basically, what we need is to peek into the current metrics and evaluate the health as UP or DOWN.

So, we need a HealthEvaluator, a list of HealthCriteria, some Operators and a Health definition.

public interface IHealthEvaluator {
    /**
     * Return an indication of health.
     * @return the health after consulting different metrics
     */
    Health health();
}
public final class CompositeMetricBasedHealthEvaluator implements IHealthEvaluator {
    /**
     * Instantiates an object of CompositeMetricBasedHealthEvaluator
     * @param healthCriteriaList List containing Metrics to be used for Health Evaluation
     * @param metricReadersList List containing Metric Readers
     */
    public CompositeMetricBasedHealthEvaluator(List<HealthCriteria<Number>> healthCriteriaList,
                                               List<MetricReader> metricReadersList) {
        this(healthCriteriaList, metricReadersList, null);
    }

    /**
     * Instantiates an object of CompositeMetricBasedHealthEvaluator
     * @param healthCriteriaList List containing Metrics to be used for Health Evaluation
     * @param metricReadersList List containing Metric Readers
     * @param metricsList List containing the Public Metrics
     */
    public CompositeMetricBasedHealthEvaluator(List<HealthCriteria<Number>> healthCriteriaList,
                                               List<MetricReader> metricReadersList,
                                               List<PublicMetrics> metricsList) {
        this.healthCriteriaList = CollectionUtils.isNotEmpty(healthCriteriaList)
                ? ListUtils.unmodifiableList(healthCriteriaList) : ListUtils.EMPTY_LIST;
        this.metricReaderList = metricReadersList;
        this.metricsList = metricsList;
    }

    /**
     * Return an indication of health.
     * @return the health after consulting different metrics
     */
    @Override
    public Health health() {
        Health.Builder curHealth = Health.up();
        Status overallStatus = Status.UP;
        for (HealthCriteria healthCriteria : this.healthCriteriaList) {
            String metricName = healthCriteria.getMetricName();
            if (StringUtils.isNotBlank(metricName)) {
                Metric metric = this.getFirstMatchingMetric(metricName);
                if (metric != null) {
                    Status status = evaluate(healthCriteria, metric);
                    curHealth.withDetail(metricName, String.format("Value:%s, Status:%s", metric.getValue(), status));
                    // Overall health is DOWN as soon as any single criterion is DOWN.
                    if (Status.DOWN.equals(status)) {
                        overallStatus = Status.DOWN;
                    }
                } else {
                    curHealth.withDetail(metricName, Status.UNKNOWN);
                }
            }
        }

        curHealth.status(overallStatus);

        return curHealth.build();
    }

    private Metric getFirstMatchingMetric(String name) {
        Object metricProvider = this.selectedMetricProvider.get(name);

        if (metricProvider instanceof MetricReader) {
            return find((MetricReader) metricProvider, name);
        } else if (metricProvider instanceof PublicMetrics) {
            return find((PublicMetrics) metricProvider, name);
        }

        // Preference to use MetricReaders
        if (CollectionUtils.isNotEmpty(this.metricReaderList)) {
            for (MetricReader metricReader : this.metricReaderList) {
                Metric<?> metric = find(metricReader, name);
                if (metric != null) {
                    this.selectedMetricProvider.put(name, metricReader);
                    return metric;
                }
            }
        }

        if (CollectionUtils.isNotEmpty(this.metricsList)) {
            for (PublicMetrics publicMetrics : this.metricsList) {
                Metric<?> metric = find(publicMetrics, name);
                if (metric != null) {
                    this.selectedMetricProvider.put(name, publicMetrics);
                    return metric;
                }
            }
        }

        return null;
    }

    private static Status evaluate(HealthCriteria healthCriteria, Metric metric) {
        int result = compare(metric.getValue(), healthCriteria.getThresholdOrSaturationLevel());
        ComparisonOperator op = healthCriteria.getOperator();

        if ((ComparisonOperator.EQUAL.equals(op) && result != 0) ||
                (ComparisonOperator.LESS_THAN.equals(op) && result >= 0) ||
                (ComparisonOperator.LESS_THAN_EQUAL.equals(op) && result > 0) ||
                (ComparisonOperator.GREATER_THAN.equals(op) && result <= 0) ||
                (ComparisonOperator.GREATER_THAN_EQUAL.equals(op) && result < 0)) {
            return Status.DOWN;
        }

        return Status.UP;
    }

    private static Metric<?> find(MetricReader reader, String name) {
        try {
            return reader.findOne(name);
        } catch (RuntimeException ex) {
            // Ignore the Runtime exceptions
            return null;
        }
    }

    private static Metric<?> find(PublicMetrics source, String name) {
        return (Metric<?>) CollectionUtils.find(source.metrics(),
                (met) -> StringUtils.equalsIgnoreCase(((Metric) met).getName(), name));
    }

    private static int compare(Number n1, Number n2) {
        if (n1 != null && n2 != null) {
            return Double.compare(n1.doubleValue(), n2.doubleValue());
        }

        if (n1 != null) {
            return 1;
        }

        if (n2 != null) {
            return -1; // Even for -ive numbers
        }
        return 0;
    }

    private final List<HealthCriteria<Number>> healthCriteriaList;
    private final List<PublicMetrics> metricsList;
    private final List<MetricReader> metricReaderList;
    private final Map<String, Object> selectedMetricProvider = new HashMap<>();
}

HealthCriteria defines 3 things: what has to be checked (the metric name), its expected value (or a range), and the operator. The value can be an integer, float or decimal etc.

public class HealthCriteria<TInput extends Number> {
    /**
     * Gets the Operator
     * @return Operator to be used for health evaluation
     */
    public ComparisonOperator getOperator() {
        return operator;
    }

    /**
     * Sets the Operator
     * @param operator Operator to be used for health evaluation
     */
    public void setOperator(ComparisonOperator operator) {
        this.operator = operator;
    }

    /**
     * Gets the Threshold or Saturation value against which health evaluation to be done
     * @return Threshold or Saturation value
     */
    public TInput getThresholdOrSaturationLevel() {
        return thresholdOrSaturationLevel;
    }

    /**
     * Sets the Threshold or Saturation value against which health evaluation to be done
     * @param thresholdOrSaturationLevel Threshold or Saturation value
     */
    public void setThresholdOrSaturationLevel(TInput thresholdOrSaturationLevel) {
        this.thresholdOrSaturationLevel = thresholdOrSaturationLevel;
    }

    /**
     * Gets the name of the metric to be used for health evaluation
     * @return Metric name
     */
    public String getMetricName() {
        return metricName;
    }

    /**
     * Sets the name of the metric to be used for health evaluation
     * @param metricName Metric name
     */
    public void setMetricName(String metricName) {
        this.metricName = metricName;
    }

    private String metricName;
    private TInput thresholdOrSaturationLevel;
    private ComparisonOperator operator;
}

@Configuration
@ConfigurationProperties("healthIndicator")
public class HealthCriteriaList {
    public List<HealthCriteria<Number>> getCriterias() {
        return criterias;
    }

    public void setCriterias(List<HealthCriteria<Number>> criterias) {
        this.criterias = criterias;
    }

    private List<HealthCriteria<Number>> criterias;
}

Some basic Operators that can be supported are:

public enum ComparisonOperator {
    EQUAL,
    LESS_THAN,
    LESS_THAN_EQUAL,
    GREATER_THAN,
    GREATER_THAN_EQUAL;
}

Using the above code, you can evaluate the Health based on metrics and plug it into any application, be it SPRINGBOOT or DROPWIZARD or CXF etc

A SPRINGBOOT ADAPTER like the one below can easily plug in and start evaluating the health based on metrics.

public final class MetricBasedSpringBootAdapter implements HealthIndicator {
    /**
     * Instantiates an object of MetricBasedSpringBootAdapter
     * @param healthEvaluator Reference to an instance of IHealthEvaluator impl
     */
    public MetricBasedSpringBootAdapter(IHealthEvaluator healthEvaluator) {
        Assert.notNull(healthEvaluator, "Underlying HealthEvaluator");
        this.underlyingHealthEvaluator = healthEvaluator;
    }

    /**
     * Return an indication of health.
     * @return the health for Server Instance after consulting different metrics
     */
    @Override
    public Health health() {
        return this.underlyingHealthEvaluator.health();
    }

    private final IHealthEvaluator underlyingHealthEvaluator;
}

HOW DOES IT WORK IN SPRINGBOOT?

Spring Boot includes a number of built-in endpoints.
One of the endpoints is the health endpoint which provides basic application health information.
By default, the health endpoint is mapped to /health

On invoking this endpoint, Health information is collected from all HealthIndicator beans defined in your
ApplicationContext and based on Health Status returned by these HealthIndicators, Aggregated Health Status is returned.

Spring Boot includes a number of auto-configured HealthIndicators and also allows us to write our own.

Since we keep track of certain metrics in our applications, we wanted the ability to evaluate health based on certain
metrics' values. For example, if the number of threads exceeds 'n', health shall be reported as DOWN.

For this purpose, CompositeMetricBasedHealthEvaluator is implemented.
It relies on either MetricReaders or PublicMetrics to get the metrics' current values and evaluates the
health accordingly.

It reports the individual health of all configured health-indicator criteria and reports the overall health as DOWN
if any of them is DOWN.

For an unavailable metric, health cannot be determined, and it is thus reported as UNKNOWN for that specific metric.

STEPS TO ENABLE IN SPRINGBOOT

* Enable the Health Endpoint if not enabled already
* Optionally configure a custom endpoint name and other parameters like caching of results
* Configure MetricReader(s) and/or PublicMetric(s)
* Configure the HealthIndicator metric criteria
* Instantiate CompositeMetricBasedHealthEvaluator
* Inject the MetricReaders and/or PublicMetrics and the criteria configured above
* Instantiate and inject MetricBasedSpringBootAdapter into the Spring application context
* Inject CompositeMetricBasedHealthEvaluator while instantiating
* Disable/Enable auto-configured HealthIndicators

That's all that needs to be done to enable health evaluation using metrics.

HOW TO ENABLE HEALTH ENDPOINT?

One of the ways is to enable it through Application Configuration YAML file.
In your application.yml file, put the following configuration:

endpoints:
  health:
    enabled: true
    time-to-live: 1000

With the above configuration, the health endpoint is enabled and its results will be cached for 1000 ms.
The default time-to-live is 1000 ms.

HOW TO CONFIGURE HEALTH INDICATOR METRIC CRITERIAS?

1) **VIA APPLICATION CONFIGURATION YAML file**

One of the ways is to configure it in Application Configuration YAML file itself.
In your application.yml file, put the following configuration:

healthIndicator:
  criterias:
    - metricName: threads
      thresholdOrSaturationLevel: 100
      operator: LESS_THAN
    - metricName: anotherMetricNameGoesHere
      thresholdOrSaturationLevel: 100.23
      operator: ANY_COMPARISON_OPERATOR(EQUAL, LESS_THAN, LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL)

With the above configuration, two criteria are defined and the **HealthCriteriaList** object gets instantiated via the
Configuration annotation.

Here, the threads criterion specifies that for health to be **UP**, the number of threads must be < 100.
If NumberOfThreads >= 100, health will be reported as **DOWN**.

Likewise, more criteria can be defined.

Note that
* **metricName** can contain ‘.’ character as well.
* **thresholdOrSaturationLevel** can have any Valid Number, be it Integer or Decimal Number
* **operator** can be any valid value from ComparisonOperator enum.

2) **Same Configuration can be done through code**

List<HealthCriteria<Number>> criterias = new ArrayList<>();

HealthCriteria<Number> criteria = new HealthCriteria<>();
final String expMetricName = "threads";
criteria.setMetricName(expMetricName);
criteria.setThresholdOrSaturationLevel(100);
criteria.setOperator(ComparisonOperator.LESS_THAN);

criterias.add(criteria);
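
With the criteria in hand, the evaluator from earlier can be exercised directly; this sketch assumes the criterias list above and a metricReaders list obtained from the application context:

CompositeMetricBasedHealthEvaluator evaluator =
        new CompositeMetricBasedHealthEvaluator(criterias, metricReaders);

// Status is UP only if every configured criterion passes;
// per-metric values and statuses are attached as details.
Health health = evaluator.health();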

HOW TO PLUGIN MetricBasedSpringBootAdapter?

MetricBasedSpringBootAdapter implements the HealthIndicator interface. Thus, simply injecting it into the
Spring application context plugs this component in for health evaluation.

The below configuration instantiates MetricBasedSpringBootAdapter backed by MetricReaders only.
Both parameters, healthCriteriaList and metricReaderList, are injected automatically through the Spring application
context. This happens due to auto-configuration.

@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        HealthCriteriaList healthCriteriaList,
        List<MetricReader> metricReaderList) {
    return new MetricBasedSpringBootAdapter(
            new CompositeMetricBasedHealthEvaluator(healthCriteriaList.getCriterias(), metricReaderList));
}

OR,

@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        List<HealthCriteria<Number>> healthCriteriaList,
        List<MetricReader> metricReaderList) {
    return new MetricBasedSpringBootAdapter(
            new CompositeMetricBasedHealthEvaluator(healthCriteriaList, metricReaderList));
}

OR,

@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        HealthCriteriaList healthCriteriaList,
        List<MetricReader> metricReaderList,
        List<PublicMetrics> publicMetricsList) {
    return new MetricBasedSpringBootAdapter(
            new CompositeMetricBasedHealthEvaluator(healthCriteriaList.getCriterias(),
                    metricReaderList, publicMetricsList));
}

The above configuration is useful when a MetricReader is not available to read a metric, but the metric is
available publicly through the PublicMetrics interface.
With the above configuration, all parameters are injected automatically by Spring.

Things to Note
* The name of the bean minus the suffix HealthIndicator (i.e., metricBased) is what gets reported as the HealthIndicator name.
* Auto-configuration of MetricReaders, PublicMetrics or Configuration could be disabled. If that is the case, either
enable auto-configuration or manually instantiate the MetricReaders, PublicMetrics etc.
* The PublicMetrics interface can be expensive depending upon the number of metrics being maintained. Use it only if
a custom MetricReader cannot be written or the metrics are small in number.


Is Binary Tree a Binary Search Tree?



Problem:

Given a binary tree, determine whether it is a Binary Search Tree (BST).

Definition:

What is BST?

A BST is a binary tree in which the value of the root is always greater than the value of every node in its left subtree and less than or equal to the value of every node in its right subtree; this holds recursively for every node.

Solution:

This implementation is done using C#.NET.

class BinaryTree
{
    public BinaryTreeNode Root { get; set; }

    public bool IsBinarySearchTree()
    {
        Console.WriteLine("Checking if Tree is BST or not:");

        if (this.Root != null)
        {
            // Tracks the last value seen by the in-order traversal.
            // int.MinValue (instead of 0) keeps the check correct for
            // trees containing zero or negative values.
            int value = int.MinValue;

            return this.Check(this.Root, ref value);
        }

        return true;
    }

    // In-order traversal: the tree is a BST if the visited values are
    // strictly increasing.
    private bool Check(BinaryTreeNode currentNode, ref int lastNodeValue)
    {
        bool isTreeBST;
        bool leftTreePresent = currentNode.LeftTree != null;
        bool rightTreePresent = currentNode.RightTree != null;

        if (leftTreePresent)
        {
            isTreeBST = this.Check(currentNode.LeftTree, ref lastNodeValue);
        }
        else
        {
            isTreeBST = true;
        }

        if (isTreeBST && currentNode.Info > lastNodeValue)
        {
            Console.WriteLine("Processing Node With Value:{0}", currentNode.Info);

            lastNodeValue = currentNode.Info;

            isTreeBST = true;
        }
        else
        {
            isTreeBST = false;
        }

        if (isTreeBST && rightTreePresent)
        {
            isTreeBST = this.Check(currentNode.RightTree, ref lastNodeValue);
        }

        return isTreeBST;
    }
}

class BinaryTreeNode
{
    public BinaryTreeNode LeftTree { get; set; }
    public BinaryTreeNode RightTree { get; set; }
    public int Info { get; set; }
}

The problem with the above code is that it will fail if the tree has duplicate values: the definition allows duplicates in the right subtree, but the strictly-increasing in-order check rejects them.

The approach then could be to pass a range, in terms of the minimum and maximum allowed value, for each node. Since we are traversing down from the root and know the valid range at each node, we can appropriately narrow the range and pass it on to the left and right subtrees.


// Entry point: this.Check(this.Root, int.MinValue, int.MaxValue)
private bool Check(BinaryTreeNode node, int min, int max)
{
    if (node == null)
        return true;

    // The node's value must lie within the allowed (inclusive) range.
    if (node.Info < min || node.Info > max)
        return false;

    // The left subtree must hold strictly smaller values; duplicates are
    // allowed in the right subtree, per the definition above.
    return this.Check(node.LeftTree, min, node.Info - 1)
        && this.Check(node.RightTree, node.Info, max);
}

Setting Custom Permission Levels in Sharepoint Programmatically


Before going into the How part, let's first understand: what is a Permission Level (known as Site Groups prior to WSS 3.0) in Sharepoint? Why do we need it?

What is a Permission Level?
It is a group or set of permissions/actions. By action, I mean what a particular user is allowed to do in an application.
These permissions can then be assigned to a user or a group of users to allow/restrict certain actions based upon their role in the application.

Why?
Security. Making the application secure, and defining the roles and responsibilities of its users.
Obviously, every application has users [otherwise why would it exist?]. Each user has their own set of roles and responsibilities, and as per those responsibilities, they can perform the tasks or take the actions which are laid down for them.

Have you ever seen a software developer doing the job of a CA or a Finance Head? Surely, it is not meant for the poor developer.
This is where Permission Levels are needed. They segregate permissions, clearly demarcating a boundary between what users are supposed to do and what they are not.
Continue reading Setting Custom Permission Levels in Sharepoint Programmatically

How to disable SSL TLS protocols in Springboot?


Often a requirement comes up to secure the application as well as the connections made to it.

Prior to TLS 1.2, many versions of SSL and TLS came into existence to enforce transport-layer security. Those previous versions were vulnerable to various attacks/threats, which were fixed in subsequent versions.

In order to enforce security, you may want to accept connections only over TLS v1.2, and thus enable only TLSv1.2 while disabling all other versions – SSLv3, TLS 1.0, TLS 1.1 etc.

The purpose of this article is to list the steps required to enable only TLS 1.2 and disable all other versions in a Springboot application.

PRE-REQUISITES

  • JRE
  • IDE of your choice
  • Springboot Application
  • Certificates – be it Self Signed or from Public CA

This article assumes that your application has already enabled SSL and configured certificates and secure HTTP connectors, either programmatically or through configuration.

HOW DOES IT WORK?

Before we look into the steps, let's first understand how things work. Basically, an application sets up a virtual host/container – Jetty or Tomcat or Undertow etc. – as well as HTTP listener(s).

In a Springboot application, embedded containers can be setup using

EmbeddedServletContainerFactory

during bootstrapping.

For tomcat,

TomcatEmbeddedServletContainerFactory

is initialized, and likewise for the other containers. These containers set up HTTP Connectors and configure them for

  • Port
  • URI Encoding
  • SSL Settings optionally
  • Compression optionally
  • Protocol Handler etc

HOW TO DISABLE SSL or < TLS 1.2?

  1. In Springboot versions < 1.4.x

    For Springboot applications with versions < 1.4.x, there is no support for disabling protocols through configuration. The application YAML configuration has a few properties to enable SSL, but it does not provide a mechanism to set the SSL enabled-protocols.

    Thus, changes have to be done programmatically.

  But how?

  Do I need to initialize the Tomcat factory and connector and stitch everything together?

Luckily, no. Springboot allows you to customize the existing container and further customize the connector.

Does that mean I just need to create a customizer and somehow attach it to the existing initialized container?

Yes, that’s right.

Add the below code and your problem will be solved. What we are doing is that, during the service bootstrapping process, we are injecting

EmbeddedServletContainerCustomizer

and

TomcatConnectorCustomizer

beans, and this way the Spring IoC container will stitch them together for you.


    @Bean
    public EmbeddedServletContainerCustomizer containerCustomizer(TomcatConnectorCustomizer connectorCustomizer) {
        return new EmbeddedServletContainerCustomizer() {
            @Override
            public void customize(ConfigurableEmbeddedServletContainer container) {
                if (container instanceof TomcatEmbeddedServletContainerFactory) {
                    TomcatEmbeddedServletContainerFactory tomcat = (TomcatEmbeddedServletContainerFactory) container;
                    tomcat.addConnectorCustomizers(connectorCustomizer);
                }
            }
        };
    }

    /**
     * Sets up the Tomcat Connector Customizer to enable ONLY TLSv1.2
     * @return Reference to an instance of TomcatConnectorCustomizer
     */
    @Bean
    public TomcatConnectorCustomizer connectorCustomizer() {
        return new TomcatConnectorCustomizer() {
            @Override
            public void customize(Connector connector) {
                connector.setAttribute("sslEnabledProtocols", "TLSv1.2");
            }
        };
    }

    2. In Springboot v1.4.x and later versions

      For Springboot applications > 1.4.x, things have been made much simpler and this can be done through YAML configuration.

server:
  ssl:
    enabled: true
    key-store: classpath:Keystore.jks
    key-store-password: <storepassword>
    key-password: <password>
    key-alias: <yourKeyAlias>
    enabled-protocols: [TLSv1.2]
  port: 8443

enabled-protocols: [TLSv1.2] is the trick here.

Simple. Isn’t it?

Pass Custom DateTime Zone in SQL Query Date Time Parameter | Hibernate


Are you using Hibernate and struggling with querying a DateTime column in an RDBMS (like MS-SQL) in a specific timezone?
No matter what timezone your DateTime object has, do you observe that, while issuing a Hibernate query,
the time in the JVM's default timezone always gets passed, thus not giving you the desired results?

If that's the case, this article describes a process for querying a DateTime column with a specific timezone.

WHY THIS HAPPENS?

It is because your application server and database server are running in different timezones.

If your application server and database server are running in different timezones, we need to ensure that the DateTime query parameter values are sent as per the DB timezone to get the desired results.

Let's understand how Hibernate and the DB driver form a SQL query in the next section.

HOW HIBERNATE CREATES A QUERY?

On the application server, the DB driver forms a command before sending it to the RDBMS. The database system then executes the query (compiling it if needed) and returns the results accordingly.

The DB driver instantiates a command in the form of a PreparedStatement object. A DBConnection is then attached to this command object, on which the command will be executed. Since we want to query by certain parameters – DateTime in our case – the DB driver sets the query parameters on the command.

PreparedStatement exposes few APIs to set different parameters depending upon the type of the parameter.
To pass DateTime information, the various APIs exposed are:

  • setDate
  • setTime
  • setTimestamp

All these functions allow a Calendar object to be passed. Using this Calendar object, the driver constructs the SQL DateTime value.

If this Calendar object is not passed, the driver uses the DEFAULT TIMEZONE of the JVM running the application. This is where things go wrong and the desired results are not obtained.
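For illustration, this is roughly what passing the Calendar looks like at the plain JDBC level; the table and column names here are made up:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

public class ZonedQueryExample {
    public ResultSet findUpdatedSince(Connection conn, Timestamp since) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM my_table WHERE update_date >= ?"); // hypothetical table
        // The Calendar tells the driver which timezone to use when converting
        // the Timestamp into the SQL DATETIME value; without it, the JVM's
        // default timezone is used.
        Calendar dbZone = Calendar.getInstance(TimeZone.getTimeZone("America/Los_Angeles"));
        ps.setTimestamp(1, since, dbZone);
        return ps.executeQuery();
    }
}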

How can we solve it then?

DIFFERENT APPROACHES

  1. Setting same timezone of the Application Server and of DB Server
  2. Setting timezone of the JVM as that of DB Server
  3. By extending the TimestampTypeDescriptor and AbstractSingleColumnStandardBasicType classes and attaching them to the driver

The 1st and 2nd approaches are fine; however, they can have side effects.

The 1st can impact other applications running on the same system. Usually, one application runs on a single server in a Production or LIVE environment; however, with this we are limiting the deployment of other applications.

The 2nd approach is better than the 1st since it will not impact other applications. However, the caveat here is: what if your application talks to different DB systems which are in different timezones? Or what if you want to set the timezone on only a few selected time fields?

The 3rd approach is flexible. It even allows you to represent different time fields in different timezones.

Alright. Can we have the steps, then, to implement Approach #3?

STEPS FOR 3rd Approach:

Provide Custom TimestampTypeDescriptor and AbstractSingleColumnStandardBasicType
  • Implement Descriptor class as given below:
    import java.sql.CallableStatement;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.util.Calendar;
    import java.util.TimeZone;
    
    import org.hibernate.type.descriptor.ValueBinder;
    import org.hibernate.type.descriptor.ValueExtractor;
    import org.hibernate.type.descriptor.WrapperOptions;
    import org.hibernate.type.descriptor.java.JavaTypeDescriptor;
    import org.hibernate.type.descriptor.sql.BasicBinder;
    import org.hibernate.type.descriptor.sql.BasicExtractor;
    import org.hibernate.type.descriptor.sql.TimestampTypeDescriptor;
    
    /**
     * Descriptor for {@link Types#TIMESTAMP TIMESTAMP} handling with zone.
     */
    public class CustomZonedTimestampDescriptor extends TimestampTypeDescriptor {
        public static final CustomZonedTimestampDescriptor PST_INSTANCE = new CustomZonedTimestampDescriptor();
    
        /**
         * Instantiate an object of CustomZonedTimestampDescriptor with Timezone set to "America/Los_Angeles"
         */
        public CustomZonedTimestampDescriptor() {
            this.calendar = Calendar.getInstance(TimeZone.getTimeZone("America/Los_Angeles"));
        }
    
        /**
         * Instantiate an object of CustomZonedTimestampDescriptor
         * @param zone Timezone to be used
         */
        public CustomZonedTimestampDescriptor(TimeZone zone) {
            this.calendar = Calendar.getInstance(zone);
        }
    
        /**
         * Get the binder (setting JDBC in-going parameter values) capable of handling values of the type described by the
         * passed descriptor.
         *
         * @param javaTypeDescriptor The descriptor describing the types of Java values to be bound
         *
         * @return The appropriate binder.
         */
        @Override
        public <X> ValueBinder<X> getBinder(final JavaTypeDescriptor<X> javaTypeDescriptor) {
            return new BasicBinder<X>( javaTypeDescriptor, this ) {
                @Override
                protected void doBind(PreparedStatement st, X value, int index, WrapperOptions options) throws
                        SQLException {
                    st.setTimestamp(index, javaTypeDescriptor.unwrap(value, Timestamp.class, options), calendar);
                }
            };
        }
    
        /**
         * Get the extractor (pulling out-going values from JDBC objects) capable of handling values of the type described
         * by the passed descriptor.
         *
         * @param javaTypeDescriptor The descriptor describing the types of Java values to be extracted
         *
         * @return The appropriate extractor
         */
        @Override
        public <X> ValueExtractor<X> getExtractor(final JavaTypeDescriptor<X> javaTypeDescriptor) {
            return new BasicExtractor<X>( javaTypeDescriptor, this ) {
                @Override
                protected X doExtract(ResultSet rs, String name, WrapperOptions options) throws SQLException {
                    return javaTypeDescriptor.wrap(rs.getTimestamp(name, calendar), options);
                }
    
                @Override
                protected X doExtract(CallableStatement statement, int index, WrapperOptions options) throws SQLException {
                    return javaTypeDescriptor.wrap(statement.getTimestamp(index, calendar), options);
                }
    
                @Override
                protected X doExtract(CallableStatement statement, String name, WrapperOptions options)
                        throws SQLException {
                    return javaTypeDescriptor.wrap(statement.getTimestamp(name, calendar), options);
                }
            };
        }
    
        private final Calendar calendar;
    }
    

    In the above code, the default constructor uses the PST timezone. For other timezones, simply use the parameterized constructor.

  • Implement Type class and use the above Descriptor class
    import com.expedia.www.air.commission.migration.db.descriptors.CustomZonedTimestampDescriptor;
    
    import java.util.Comparator;
    import java.util.Date;
    import java.util.TimeZone;
    
    import org.hibernate.dialect.Dialect;
    import org.hibernate.engine.spi.SessionImplementor;
    import org.hibernate.type.AbstractSingleColumnStandardBasicType;
    import org.hibernate.type.LiteralType;
    import org.hibernate.type.TimestampType;
    import org.hibernate.type.VersionType;
    import org.hibernate.type.descriptor.java.JdbcTimestampTypeDescriptor;
    
    /**
     * A type that maps between {@link java.sql.Types#TIMESTAMP TIMESTAMP} and {@link java.sql.Timestamp} with zone
     */
    public class CustomZonedTimestampType extends AbstractSingleColumnStandardBasicType<Date>
            implements VersionType<Date>, LiteralType<Date> {
        /**
         * Instantiate an object of CustomZonedTimestampType with Timezone set to "America/Los_Angeles"
         */
        public CustomZonedTimestampType() {
            super(CustomZonedTimestampDescriptor.PST_INSTANCE, JdbcTimestampTypeDescriptor.INSTANCE);
        }
    
        /**
         * Instantiate an object of CustomZonedTimestampType
         * @param zone Timezone to be used
         */
        public CustomZonedTimestampType(TimeZone zone) {
            super(new CustomZonedTimestampDescriptor(zone), JdbcTimestampTypeDescriptor.INSTANCE);
        }
    
        /**
         * Returns the abbreviated name of the type.
         * @return String the Hibernate type name
         */
        @Override
        public String getName() {
            return TimestampType.INSTANCE.getName();
        }
    
        /**
         * Convert the value into a string representation, suitable for embedding in an SQL statement as a
         * literal.
         * @param value The value to convert
         * @param dialect The SQL dialect
         * @return The value's string representation
         * @throws Exception Indicates an issue converting the value to literal string.
         */
        @Override
        public String objectToSQLString(Date value, Dialect dialect) throws Exception {
            return TimestampType.INSTANCE.objectToSQLString(value, dialect);
        }
    
        /**
         * Generate an initial version.
         * @param session The session from which this request originates.
         * @return an instance of the type
         */
        @Override
        public Date seed(SessionImplementor session) {
            return TimestampType.INSTANCE.seed(session);
        }
    
        /**
         * Increment the version.
         * @param current the current version
         * @param session The session from which this request originates.
         * @return an instance of the type
         */
        @Override
        public Date next(Date current, SessionImplementor session) {
            return TimestampType.INSTANCE.next(current, session);
        }
    
        /**
         * Get a comparator for version values.
         * @return The comparator to use to compare different version values.
         */
        @Override
        public Comparator<Date> getComparator() {
            return getJavaTypeDescriptor().getComparator();
        }
    }
    
  • Add an annotation @Type on the fields for which proper Timezone has to be used
    @Entity
    public class MyEntityClass implements Serializable {
        public static final String CUSTOMIZED_TIMESTAMP_TYPE = "com.db.types.CustomZonedTimestampType";
    
        public Date getUpdateDate() {
            return updateDate;
        }
    
        public void setUpdateDate(Date updateDate) {
            this.updateDate = updateDate;
        }
    
        @Type(type = CUSTOMIZED_TIMESTAMP_TYPE)
        private Date updateDate;
    }
    

As per the above code, MyEntityClass has a field named updateDate for which we want to send the date and time in the correct timezone.

ADVANTAGES

  • Reliability: the expected results are obtained from the DB
  • No other application running on the same system is impacted
  • Most importantly, the above steps enable the code to run on any system set to any timezone


Data Contracts, XSDs and Redundant List Wrappers – XEW Plugin to rescue


In Service Oriented Architecture (SOA) or MicroServices Architecture, data is exchanged between different components over the network.

Keeping INTEROPERABILITY in mind, Data Contracts are created and shared.

Contracts, either in the form of WSDLs or XSDs etc., are mutually agreed between the components to exchange structured data.

As part of these contracts, you may need to send a collection of similar data, and for this purpose you may have defined different complexTypes in your XSD.

This article talks about the problem associated with defining List complex types, how we can overcome it using the XEW Plugin, and the benefits.

Consider you want to exchange a list of AirSegments under an Itinerary, like:


<AirSegmentBookedList>
    <AirSegmentBooked>
        <CarrierCode>AC</CarrierCode>
        <FlightNumber>12</FlightNumber>
    </AirSegmentBooked>
    <AirSegmentBooked>
        <CarrierCode>AC</CarrierCode>
        <FlightNumber>13</FlightNumber>
    </AirSegmentBooked>
    <AirSegmentBooked>
        <CarrierCode>AC</CarrierCode>
        <FlightNumber>189</FlightNumber>
    </AirSegmentBooked>
</AirSegmentBookedList>

To accomplish this, you would define something like below:

<xs:complexType name="OriginDestinationBookedType">
    <xs:sequence>
        <xs:element name="AirSegmentBookedList" type="SegmentBookedListType"/>
    </xs:sequence>
</xs:complexType>

<xs:complexType name="SegmentBookedListType">
    <xs:sequence>
        <xs:element maxOccurs="unbounded" name="AirSegmentBooked" type="SegmentBookedType"/>
    </xs:sequence>
</xs:complexType>

<xs:complexType name="SegmentBookedType">
    <xs:sequence>
        <xs:element name="CarrierCode" type="CarrierCodeType"/>
        <xs:element name="FlightNumber" type="FlightNumberType"/>
    </xs:sequence>
</xs:complexType>

This looks good – until we generate the proxy classes out of these contracts. Give it a try: generate the classes from these XSDs using plugins such as JAXB-MAVEN, CXF etc.

You will notice that 3 proxy classes get generated.

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "OriginDestinationBookedType", propOrder = {
    "segmentBookedList"
})
public class OriginDestinationBookedType {
    @XmlElement(name = "SegmentBookedList", required = true)
    protected SegmentBookedListType segmentBookedList;

    public SegmentBookedListType getSegmentBookedList() {
        return segmentBookedList;
    }

    public void setSegmentBookedList(SegmentBookedListType value) {
        this.segmentBookedList = value;
    }
}

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "SegmentBookedListType", propOrder = {
    "segmentBookeds"
})
public class SegmentBookedListType {
    @XmlElement(name = "SegmentBooked", required = true)
    protected List<SegmentBookedType> segmentBookeds;

    public List<SegmentBookedType> getSegmentBookeds() {
        if (segmentBookeds == null) {
            segmentBookeds = new ArrayList<SegmentBookedType>();
        }
        return this.segmentBookeds;
    }
}

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "SegmentBookedType", propOrder = {
    "carrierCode",
    "flightNumber"
})
public class SegmentBookedType {
    @XmlElement(name = "CarrierCode", required = true)
    protected String carrierCode;
    @XmlElement(name = "FlightNumber", required = true)
    protected String flightNumber;
}

With the above classes, if you want to get access to a segment within an OD, you will have to write:

OriginDestinationBookedType od; // Initialized properly and you have a non-null reference
od.getSegmentBookedList().getSegmentBookeds().get(segIndex);

The getSegmentBookedList() hop above is redundant and not needed for sure. Instead, we want to have:

od.getSegmentBookeds().get(segIndex);

How can we directly get a list of segments under an OD?

Solution
Integrate the XEW Plugin into your repository and have it executed during the code-generation phase.
Simply add:

<plugin>
    <groupId>org.jvnet.jaxb2.maven2</groupId>
    <artifactId>maven-jaxb2-plugin</artifactId>
    <version>0.13.1</version>
    <dependencies>
        <dependency>
            <groupId>org.jvnet.jaxb2_commons</groupId>
            <artifactId>jaxb2-basics</artifactId>
            <version>0.6.3</version>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <id>air-ticket-schema</id>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <extension>true</extension>
                <args>
                    <arg>-Xannotate</arg>
                    <arg>-Xxew</arg>
                    <arg>-Xxew:control ${basedir}/src/main/resources/xsds/xewInclusionExclusion.txt</arg>
                </args>
                <plugins>
                    <plugin>
                        <groupId>org.jvnet.jaxb2_commons</groupId>
                        <artifactId>jaxb2-basics-annotate</artifactId>
                        <version>1.0.2</version>
                    </plugin>
                    <plugin>
                        <groupId>com.github.jaxb-xew-plugin</groupId>
                        <artifactId>jaxb-xew-plugin</artifactId>
                        <version>1.9</version>
                    </plugin>
                    <plugin>
                        <groupId>com.sun.xml.bind</groupId>
                        <artifactId>jaxb-xjc</artifactId>
                        <version>2.2.11</version>
                    </plugin>
                </plugins>
                <!-- element name assumed; only the value appeared in the original -->
                <vmArgs>
                    <vmArg>-Djavax.xml.accessExternalSchema=all</vmArg>
                </vmArgs>
                <schemaDirectory>${basedir}/src/main/resources/xsds</schemaDirectory>
                <schemaIncludes>
                    <include>yourXSDsHere.xsd</include>
                </schemaIncludes>
                <generateDirectory>${basedir}/target/generated-sources</generateDirectory>
                <bindingDirectory>${basedir}/src/main/resources/xsds</bindingDirectory>
                <bindingIncludes>
                    <include>bindings.xjb</include>
                </bindingIncludes>
                <!-- option element names assumed; only the values appeared in the original -->
                <removeOldOutput>false</removeOldOutput>
                <forceRegenerate>false</forceRegenerate>
                <episode>true</episode>
            </configuration>
        </execution>
    </executions>
</plugin>

With the above configuration, only 2 classes will be generated.

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "OriginDestinationBookedType", propOrder = {
    "segmentBookedList"
})
public class OriginDestinationBookedType {

    @XmlElementWrapper(name = "SegmentBookedList", required = true)
    @XmlElement(name = "SegmentBooked")
    protected List<SegmentBookedType> segmentBookedList;

    public List<SegmentBookedType> getSegmentBookedList() {
        return segmentBookedList;
    }

    public void setSegmentBookedList(List<SegmentBookedType> value) {
        this.segmentBookedList = value;
    }
}

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "SegmentBookedType", propOrder = {
    "carrierCode",
    "flightNumber"
})
public class SegmentBookedType {

    @XmlElement(name = "CarrierCode", required = true)
    protected String carrierCode;

    @XmlElement(name = "FlightNumber", required = true)
    protected String flightNumber;
}

And you are all set. No more cursing at the XSDs :)

ADVANTAGES

    No more List wrapper classes, and no more clumsy glue code
    No redundant null checks
    More readable call sites
    Fewer machine instructions to execute (one method call instead of two)
    A smaller memory footprint: fewer generated classes and their virtual function tables to load
    Better maintainability



Decompiling Powershell CmdLet Code


Many of us are involved in writing scripts, be it for development, testing, or deployment.
We make use of different scripting languages. One of them is PowerShell.
As the name suggests, it's really powerful.

You can accomplish so many things in PowerShell. But what if you already have something developed in .NET, with an assembly (remember the *.dll file?) available with you?

Would you mimic everything in PowerShell? Or would you rather reuse the same .NET assembly?

I fall in the latter category wherever possible. :)
Yes, you can reuse a .NET library.
Aahhh!!! Great!!! Sounds interesting!!!!

Many of us are aware of this, and maybe a few of us are not.

Why am I writing this?
I was working on automating a workflow to deploy Virtual Machines (aka Persistent VM Roles) on the Microsoft Azure cloud.
I did it using PowerShell scripts (you can find a lot of support and sample PowerShell scripts on the MS community sites).

That became simple. However, that's not all for me.

I am hungry :) hungry to understand things, to go all the way to the roots.

I wanted to understand the code working behind the scenes.

Read this post further…

What are Powershell CmdLets?
PowerShell cmdlets are, in fact, exposed through .NET assemblies. A bunch of assemblies targeting the .NET Framework execute to produce the results we want.

If you have worked in .NET, you would have come across attributes. Yes, that is how PowerShell cmdlets are exposed.

Classes are attributed with Cmdlet, and their fields/parameters with Parameter.
That's it. The PowerShell execution engine can now load these types and execute them.

Bottom line: cmdlets are classes annotated with the Cmdlet attribute.
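To make that concrete, here is a minimal sketch of such a class (the Get-Greeting command and the GetGreetingCommand class are made up for illustration, not a real cmdlet):

using System.Management.Automation; // PowerShell SDK

// The Cmdlet attribute maps this class to the Get-Greeting command;
// the Parameter attribute exposes the Name property as -Name.
[Cmdlet(VerbsCommon.Get, "Greeting")]
public class GetGreetingCommand : Cmdlet
{
    [Parameter(Mandatory = true)]
    public string Name { get; set; }

    // Called for each item in the pipeline; writes the result back to it.
    protected override void ProcessRecord()
    {
        WriteObject("Hello, " + Name + "!");
    }
}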

How to decompile?
Now we know that it's actually a .NET type in a .NET assembly that is getting things done, and we all know how to decompile a .NET assembly.
We may use third-party tools; some are free, while some are not.
This is not a big deal.

However, how do you identify and locate the assembly containing a specific cmdlet?

You may say that you are not the CLR, which is responsible for locating, loading, and executing types, among other things.

Then HOW, you may ask.

For this, we'll again make use of the PowerShell command prompt.

Open it up and execute the following command:

$commandDetailsObj = Get-Command nameOfCommand

<# where,
$commandDetailsObj is how you declare a variable in PowerShell,
Get-Command is another PowerShell cmdlet, gcm is an alias of this cmdlet,
and
nameOfCommand is the name of the cmdlet which you want to decompile, say, Add-AzureAccount
#>

The above command gets the details about the cmdlet and stores them in the $commandDetailsObj variable.
Since the cmdlet name can actually be an alias of the real cmdlet, we keep doing the below until we reach the actual command.



# Follow the alias chain until we reach the real cmdlet
while ($commandDetailsObj.CommandType -eq "Alias")
{
    $commandDetailsObj = Get-Command ($commandDetailsObj.Definition)
}

Next, we want to get the type exposed as the cmdlet. Issue the following command:

$commandDetailsObj.ImplementingType

Executing the above command will print the fully qualified class name on the console.

Next, we want to get the assembly (DLL) containing the type exposed as the cmdlet. Issue the following command:

$commandDetailsObj.DLL

Executing the above command will print the full path of the assembly on the console.

With the above information, we can now open this DLL in any .NET decompilation tool to view the code.
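If you prefer to drive the same lookup from .NET code instead of the console, here is a rough sketch; it assumes a reference to the System.Management.Automation assembly from the PowerShell SDK and uses Get-Process purely as an example:

using System;
using System.Management.Automation;

class CmdletLocator
{
    static void Main()
    {
        using (PowerShell ps = PowerShell.Create())
        {
            // Equivalent of running: Get-Command Get-Process
            ps.AddCommand("Get-Command").AddParameter("Name", "Get-Process");

            foreach (PSObject result in ps.Invoke())
            {
                // Get-Command returns CommandInfo objects; for cmdlets it is a CmdletInfo
                CmdletInfo cmdlet = result.BaseObject as CmdletInfo;
                if (cmdlet != null)
                {
                    Console.WriteLine(cmdlet.ImplementingType.FullName);          // the type
                    Console.WriteLine(cmdlet.ImplementingType.Assembly.Location); // the DLL
                }
            }
        }
    }
}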

This article doesn't cover what you were looking for? Need help? Contact me.


Activation error occured while trying to get instance of type Database, key “” | EntLib


The title of this post may sound a bit strange to those who have not faced this problem, but it may sound like sweet music :) to those who want to resolve this nasty error in their application.

If you fall into the latter category, you can jump directly to the Resolution section, though everybody is of course welcome to read the entire post.

What is this about?
An error which occurs when using the Enterprise Library Data Access Block to instantiate a Database via the factory approach.
You may have followed the MSDN article to set up the Data Access Block, with the correct code and configuration in your application, and yet you always end up with this error when you try to instantiate a database object.

Context
Typically, software solutions are multi-layered, one of the layers being a Data Access Layer (DAL) which interacts with the data store(s) and performs CRUD operations on the data. In this layer you can opt for ADO.NET or the Enterprise Library Data Access Block, among other options, to connect to the data store (database).

Since this post is about a specific error raised by EntLib, let's assume we chose to implement the DAL using the EntLib Data Access Block.

Problem / Error
Activation error occured while trying to get instance of type Database, key “”
This error occurs on the code statement below, the very first statement executed to perform a CRUD operation against the data store.

Database dataStore = DatabaseFactory.CreateDatabase();

or,

Database dataStore = DatabaseFactory.CreateDatabase("someKey");

Cause
The Enterprise Library consists of a number of classes across different namespaces and assemblies.
Two of them are:

  1. Microsoft.Practices.EnterpriseLibrary.Data
  2. Microsoft.Practices.EnterpriseLibrary.Common

The above code statement lives in the former assembly. After a series of function calls through both assemblies, a function in the latter assembly tries to load the former assembly using its partial name.

Note: loading an assembly using a partial name
This is what leads to the error when the Enterprise Library assemblies are GACed and not copied locally into the application directory.
An assembly requested by a partial name is not looked up in the GAC; instead, probing continues in the local application directory and its sub-directories, as per configuration.
Since the assembly is not present anywhere except the GAC, the load fails, leading to this error.
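To see this failure mode in isolation, here is a tiny sketch (an illustration of partial-name loading, not EntLib's actual code):

using System;
using System.Reflection;

class PartialNameLoadDemo
{
    static void Main()
    {
        // Loading by partial display name (no version, culture, or public key
        // token) probes only the application base directory, not the GAC.
        // If the assembly lives only in the GAC, this throws FileNotFoundException.
        Assembly asm = Assembly.Load("Microsoft.Practices.EnterpriseLibrary.Data");
        Console.WriteLine(asm.FullName);
    }
}

Supplying the full display name instead, as the qualifyAssembly configuration in the Resolution section does, lets the binder locate the assembly in the GAC.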

You can see this in action by launching the Fusion Log Viewer utility, which ships with the .NET tools. The command is "fuslogvw", in case you could not locate the utility; type it in the Visual Studio Command Prompt.
You may need to configure the Log Viewer to log all bindings to disk in order to see every log entry.

[You can opt to open this assembly in Reflector or ILSpy and walk through each code statement and function call following the statement above to understand more.]

So, is there a solution or a workaround for the above problem?

Resolution
This problem is solvable. :)
It can be solved in several ways; choose what suits you best.

  1. You can deploy the Enterprise Library assembly, "Microsoft.Practices.EnterpriseLibrary.Data", locally to the application's bin directory. [This may lead to maintaining multiple copies of the same assembly.]
  2. Another option is to add the configuration below to the application configuration file. This is a cleaner approach, but the same configuration change has to be made in every application that uses this library.

    <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <qualifyAssembly partialName="Microsoft.Practices.EnterpriseLibrary.Data" fullName="Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </assemblyBinding>
    </runtime>

    Thanks to Mr. Philippe for this second solution, posted at CodePlex.

DeterminantOfMatrix


4. Determinant of a 2D matrix

Problem:

Given a 2D square matrix, determine its determinant.

Solution:

This implementation is done using C#.NET. The rectangular matrix is declared using the int[,] syntax.

public static long EvaluateDeterminant(int[,] matrix)
{
    long determinant = 0;

    if (matrix == null || matrix.GetUpperBound(0) != matrix.GetUpperBound(1))
    {
        Console.WriteLine("Non-square matrix can't have a determinant");
        return determinant;
    }

    int row_UB = matrix.GetUpperBound(0);

    determinant = Determinant(matrix, row_UB + 1);

    return determinant;
}

private static long Determinant(int[,] matrix, int size)
{
    long determinant = 0;

    if (size == 1) // 1x1 matrix
    {
        determinant = matrix[0, 0];
    }
    else if (size == 2) // 2x2 matrix
    {
        determinant = matrix[0, 0] * matrix[1, 1] - matrix[0, 1] * matrix[1, 0]; // this product could be cached
    }
    else
    {
        // Laplace expansion along the first row, with alternating signs
        for (int i = 0; i < size; i++)
        {
            int multiplier = (i % 2 == 0) ? 1 : -1;

            determinant += multiplier * matrix[0, i] * Determinant(GetMinor(matrix, size, 0, i), size - 1);
        }
    }

    return determinant;
}

/// <summary>
/// Gets the minor of a square matrix, i.e. the matrix that remains after
/// removing the given row and column.
/// </summary>
/// <param name="matrix"></param>
/// <param name="size"></param>
/// <param name="rowIndex"></param>
/// <param name="colIndex"></param>
/// <returns></returns>
/// <remarks>
/// If this function has to be public, checks on rowIndex and colIndex
/// should be added, and size need not be passed.
/// </remarks>
private static int[,] GetMinor(int[,] matrix, int size, int rowIndex, int colIndex)
{
    int minorSize = size - 1;
    int[,] minor = new int[minorSize, minorSize];

    // Copy the four blocks around (rowIndex, colIndex) into the minor.

    // rows above, columns to the left
    for (int i = 0; i < rowIndex; i++)
    {
        for (int j = 0; j < colIndex; j++)
        {
            minor[i, j] = matrix[i, j];
        }
    }

    // rows below, columns to the left
    for (int i = rowIndex + 1; i < size; i++)
    {
        for (int j = 0; j < colIndex; j++)
        {
            minor[i - 1, j] = matrix[i, j];
        }
    }

    // rows above, columns to the right
    for (int i = 0; i < rowIndex; i++)
    {
        for (int j = colIndex + 1; j < size; j++)
        {
            minor[i, j - 1] = matrix[i, j];
        }
    }

    // rows below, columns to the right
    for (int i = rowIndex + 1; i < size; i++)
    {
        for (int j = colIndex + 1; j < size; j++)
        {
            minor[i - 1, j - 1] = matrix[i, j];
        }
    }

    return minor;
}
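A quick usage sketch (the sample values are made up): the rows of this 3x3 matrix are linearly dependent, so the determinant should come out as 0.

int[,] sample =
{
    { 1, 2, 3 },
    { 4, 5, 6 },
    { 7, 8, 9 }
};

// Row 3 = 2 * Row 2 - Row 1, so the determinant must be 0
Console.WriteLine(EvaluateDeterminant(sample)); // prints 0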

Jaagrugta Failao (Spread Awareness) – By Sunil Singhal