Hazelcast Cache Manager

Introduction

A cache is a high-speed data storage layer that holds information temporarily, so that future requests for that data can be served faster, improving the overall speed of running applications.

With the use of caches you get a number of advantages:

  • Improved application performance.

  • Cost reduction. With traditional databases and disk-based hardware, adding resources does not necessarily improve latency the way a cache usually does.

  • Reduction of the load on the back end.

  • Increased read performance, with lower latency and much higher request throughput compared to a disk-based database.

This example explains how to use the Cache Management functionality to store different types of data, such as database query results, API requests and responses, web pages, and user session information, among others.

This service uses Hazelcast, a distributed storage system that helps manage data and distribute processing using in-memory storage and parallel execution. Hazelcast’s distributed architecture provides redundancy for continuous cluster uptime and always-available data to serve the most demanding applications. Capacity grows elastically with demand, without compromising performance or availability.

One common use of cache management is to make the data of a file or a database available through the API: simply reading the data and loading it into the cache allows it to be accessed through the different endpoints.
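The loading step described above can be sketched in Python. This is a minimal example of turning a file's rows into key-value entries ready to be pushed to a cache; the sample data and the choice of key field are assumptions for illustration, not taken from the platform.

```python
import csv
import io

def csv_to_cache_entries(csv_text, key_field):
    """Turn CSV rows into a dict of key-value entries ready for a cache."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row[key_field]: row for row in reader}

# Sample users file keyed by username (illustrative data).
data = "username,signup_day\nalice,monday\nbob,tuesday\n"
entries = csv_to_cache_entries(data, "username")
print(entries["alice"]["signup_day"])  # monday
```

Each entry in the resulting dict can then be sent to the cache through the REST API described below.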

Creating a Cache

To do this, a profile with the administrator role is required. Go to the menu: Administration > Cache Management.

Once in the Cache Management screen, click on "Create":

To create a new cache, you must indicate its name, type, maximum size policy, size, and eviction policy.

Currently, only the map cache type is supported. Its maximum size policies can be:

  • PER_NODE: Maximum number of map entries in each cluster member. This is the default policy.

  • PER_PARTITION: Maximum number of map entries within each partition.

  • USED_HEAP_SIZE: Maximum used heap size in megabytes per map for each instance.

  • USED_HEAP_PERCENTAGE: Maximum used heap size percentage per map for each instance.

  • FREE_HEAP_SIZE: Minimum free heap size in megabytes for each JVM.

  • FREE_HEAP_PERCENTAGE: Minimum free heap size percentage for each JVM.

  • USED_NATIVE_MEMORY_SIZE: Maximum used native memory size in megabytes per map for each instance.

  • USED_NATIVE_MEMORY_PERCENTAGE: Maximum used native memory size percentage per map for each instance.

  • FREE_NATIVE_MEMORY_SIZE: Minimum free native memory size in megabytes for each instance.

  • FREE_NATIVE_MEMORY_PERCENTAGE: Minimum free native memory size percentage for each instance.

And whose eviction policies are:

  • LRU: Least Recently Used.

  • LFU: Least Frequently Used.

  • NONE: If set, no items are evicted and the configured maximum size is ignored.

  • RANDOM: A randomly selected entry is evicted.
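Putting the fields together, a cache definition could look like the following JSON. The field names and values here are illustrative assumptions based on the form described above, not the platform's exact schema; check the creation screen for the actual fields.

```json
{
  "name": "user",
  "type": "MAP",
  "maxSizePolicy": "PER_NODE",
  "size": 10000,
  "evictionPolicy": "LRU"
}
```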

REST API

To work with the cache, you need to use its specific REST API, which can be accessed from the APIs section of the platform (https://lab.onesaitplatform.com/controlpanel/swagger-ui.html).

Once the cache is created, information can be inserted into it with the last two endpoints by entering the desired key-value pairs. In this case, we will insert several values at the same time into the user cache.
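The bulk insert can be sketched with a small Python helper that builds the request. The base path, endpoint name, and payload shape here are assumptions for illustration; check the Swagger UI mentioned above for the exact contract of your platform version.

```python
import json

# Hypothetical base path for the cache REST API; verify it in the Swagger UI.
BASE_URL = "https://lab.onesaitplatform.com/controlpanel/api/caches"

def build_put_all_request(cache_id, entries, token):
    """Build the URL, headers and JSON body for a bulk key-value insert."""
    url = f"{BASE_URL}/{cache_id}/putAll"  # hypothetical endpoint name
    headers = {
        "Authorization": f"Bearer {token}",  # platform APIs require a token
        "Content-Type": "application/json",
    }
    body = json.dumps(entries)
    return url, headers, body

# Several values at once for the "user" cache (sample data).
url, headers, body = build_put_all_request(
    "user",
    {"monday": ["alice"], "tuesday": ["bob", "carol"]},
    "my-token",
)
# To actually send it (requires the requests package and a valid token):
# requests.put(url, headers=headers, data=body)
print(url)
```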

Afterwards, the application can query the data with the retrieval operations, indicating the cache identifier and the keys of the desired values. For example, to check which users registered on a specific day, enter the day as the key:

The response returned by the request is the following:

To retrieve all the cached key-value pairs or just one, use the API's get methods: getAll and get/{Key}, respectively.
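The two retrieval operations can be sketched as URL builders. The getAll and get/{Key} operation names come from the API described above, but the base path is an assumption; verify it in the Swagger UI.

```python
# Hypothetical base path for the cache REST API; verify it in the Swagger UI.
BASE_URL = "https://lab.onesaitplatform.com/controlpanel/api/caches"

def get_one_url(cache_id, key):
    """URL for the get/{Key} operation: a single value by key."""
    return f"{BASE_URL}/{cache_id}/get/{key}"

def get_all_url(cache_id):
    """URL for the getAll operation: every key-value pair in the cache."""
    return f"{BASE_URL}/{cache_id}/getAll"

# Query the "user" cache for the users registered on Monday, or all of them.
print(get_one_url("user", "monday"))
print(get_all_url("user"))
```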

In this way, you can keep control of the cached data, updating and querying it as needed.

To clear the cache, you only need its identifier; all of its data will then be deleted.
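As with the other operations, the clear request only needs the cache identifier. The base path and endpoint name are assumptions for illustration; confirm them in the Swagger UI.

```python
# Hypothetical base path for the cache REST API; verify it in the Swagger UI.
BASE_URL = "https://lab.onesaitplatform.com/controlpanel/api/caches"

def clear_url(cache_id):
    """URL for clearing a cache: only its identifier is needed."""
    return f"{BASE_URL}/{cache_id}/clear"  # hypothetical endpoint name

# To actually clear it (requires the requests package and a valid token):
# requests.delete(clear_url("user"), headers={"Authorization": "Bearer <token>"})
print(clear_url("user"))
```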