API Reliability

API reliability involves ensuring that the API has maximum uptime and that it can scale as required to meet request demand. In Fiorano API Management, this is achieved using the features described below.

High Availability (HA)

Server Layer High Availability

Fiorano High Availability is a feature inherently present in all Fiorano servers. HA can be of two types: active-active and active-passive. When a server fails in Fiorano's HA environment, a pre-configured backup server takes over the entire workload of the failed server.

In active-active HA, two or more API Gateway servers are always active and running. The API project is configured to run on a server group that contains all of these active gateway servers. Once configured, the project is deployed to the group by default, ensuring that every server in the group hosts the API project. Any server in the group can then serve requests, with the others acting as backups.
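As an illustration only (this is not Fiorano-specific behaviour, and in practice a load balancer usually fronts the group), a client or health-check layer can fail over between the active gateways in such a group. The gateway URLs below are hypothetical placeholders, not real endpoints.

```python
import requests

# Hypothetical endpoints of API Gateway servers in the same server group;
# any of them can serve the request because the API project is deployed on all.
GATEWAY_URLS = [
    "https://gateway-1.example.com/projects/orders/1.0/status",
    "https://gateway-2.example.com/projects/orders/1.0/status",
]


def call_api(urls, timeout=3):
    """Try each gateway in turn and return the first successful response."""
    last_error = None
    for url in urls:
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response
        except requests.RequestException as error:
            last_error = error  # this gateway is unreachable; try the next one
    raise RuntimeError("All gateways in the server group are unreachable") from last_error


if __name__ == "__main__":
    print(call_api(GATEWAY_URLS).status_code)
```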

For active-passive HA, Configuring High Availability Servers describes how to configure servers in active-passive mode, where one gateway runs as active and another runs as passive. The passive gateway automatically starts and deploys the requisite API projects as soon as the active server fails.

For even more robust high availability, multiple active-passive HA server pairs can be configured as a single server group, providing both active-passive and active-active HA.

Containerization based HA

Fiorano applications can be fully containerized into Docker images. Containerization of Fiorano Application describes how to create a Docker image of Fiorano and deploy it into Docker or any container management environment. Once deployed, container re-spawning or auto-creation can be leveraged directly to provide high availability of the application services.
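As a minimal sketch of leveraging container re-spawning, the Docker SDK for Python can start a container with a restart policy so that the Docker engine recreates it whenever it exits. The image tag, container name, and port below are assumptions for illustration; substitute the image built by following the Containerization of Fiorano Application guide.

```python
import docker

client = docker.from_env()

# Hypothetical image tag and port mapping; replace with the image built from
# the Containerization of Fiorano Application guide and the actual gateway port.
container = client.containers.run(
    "fiorano/api-gateway:latest",        # assumed image tag
    detach=True,
    ports={"1856/tcp": 1856},            # assumed gateway port mapping
    restart_policy={"Name": "always"},   # Docker re-spawns the container if it stops
    name="fiorano-api-gateway",
)

print(container.status)
```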

Vertical and Horizontal Scaling of APIs

Vertical Scaling

Vertical scaling of Fiorano services is achieved by increasing the RAM, CPU, and disk I/O of the underlying hardware. Once this is done, the application server can handle more requests per second per server, depending on its configuration.

This is a hardware-layer change and may require a server restart, depending on the hardware. Virtual servers can sometimes be resized without a restart.

Horizontal Scaling

Fiorano API Management has an inbuilt horizontal scaling feature. To use it, a new API Gateway is deployed and configured to run as part of the same server group hosting the API project; this configuration automatically scales out the deployment, deploying the API project onto the newly added server. Horizontal scaling can also be done using container management tools: containerized Fiorano applications can be scaled directly using replicas and a load balancer service. For example, the API Gateway Deployment can be set to run n replicas as needed, and a load balancer service can route requests to each replica according to the load balancing algorithm in use.
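For example, using the Kubernetes Python client, the replica count of a containerized gateway Deployment can be changed programmatically (equivalent in effect to `kubectl scale`). The Deployment name and namespace below are assumptions for illustration.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# Hypothetical Deployment name and namespace for a containerized API Gateway.
DEPLOYMENT = "fiorano-api-gateway"
NAMESPACE = "fiorano"

# Scale the Deployment out to 3 replicas; a Service in front of the
# Deployment then load-balances requests across all replicas.
apps.patch_namespaced_deployment_scale(
    name=DEPLOYMENT,
    namespace=NAMESPACE,
    body={"spec": {"replicas": 3}},
)
```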

Kubernetes also provides auto-scaling through the Horizontal Pod Autoscaler (HPA), which scales based on CPU or memory usage. The HPA scales services up or down according to demand and the CPU or memory limits configured.
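A hedged sketch of creating such an autoscaler with the Kubernetes Python client is shown below. The target Deployment name and namespace are assumptions; note that the autoscaling/v1 API used here supports CPU utilization targets, while memory-based targets require the autoscaling/v2 API.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Hypothetical target Deployment and namespace for a containerized API Gateway.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="fiorano-api-gateway-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1",
            kind="Deployment",
            name="fiorano-api-gateway",
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="fiorano",
    body=hpa,
)
```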

