
I am going to be using AWS ElastiCache Redis for my CodeIgniter 3 application. I get a fair amount of traffic and am wondering if there is anything I need to look out for in terms of setup. I get 1700 requests a minute at peak and would be using this for PHP sessions. I am also wondering what ElastiCache instance size will work (AWS).

I am moving away from database-backed sessions because GET_LOCK is causing a lot of connections to pile up.

Based on my initial testing it seems to work great and is fast. I queried the size of the session tables (multiple app instances) and had about 100MB of session data.
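
For reference, pointing CodeIgniter 3 sessions at ElastiCache is roughly the following in application/config/config.php (the endpoint hostname and values below are placeholders, not real settings):

    // application/config/config.php - a minimal sketch; endpoint and values are placeholders
    $config['sess_driver']     = 'redis';
    $config['sess_save_path']  = 'tcp://my-redis.xxxxxx.0001.use1.cache.amazonaws.com:6379';
    $config['sess_expiration'] = 7200;    // session length in seconds
    $config['sess_match_ip']   = FALSE;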

4 Answers


  1. AWS has a document which covers that –
    https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/nodes-select-size.html

    It really depends on the amount of data you have (try to estimate your growth as well), whether your application is write-heavy, and so on.
    I would also recommend going over the “Configure Amazon ElastiCache for Redis for higher availability” document and taking it into consideration.

  2. Interesting question. I have managed something similar with NGINX and just 2GB of RAM. The first thing I would advise is to implement as much caching logic as possible. You can use Redis both as a cache and as the session driver, but it’s really important to ensure those 1700 requests are not all hitting the DB unless necessary (see the cache sketch at the end of this answer).

    Secondly, you need to ensure your server and DB are configured to handle that number of connections properly. There are three particularly useful resources for this: https://gist.github.com/denji/8359866 (also worth reading https://www.nginx.com/blog/tuning-nginx/), MySQLTuner, and PHP-FPM high-performance tuning guides.

    Also, from experience, running PHP-FPM on a TCP port is more reliable than running it on a Unix socket due to kernel limits (which can be tweaked for improvement).

    Finally, I’d also add Cloudflare to protect against basic DDoS attacks and the like, so you only have to worry about real users.
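
    As a rough illustration of the caching point above, here is a minimal CodeIgniter 3 sketch using the framework’s Redis cache adapter (the key name, query and TTL are made-up examples; the Redis connection details normally go in application/config/redis.php):

        // Hedged sketch inside a CodeIgniter 3 controller or model.
        // 'recent_posts', the query and the 300s TTL are example values only.
        $this->load->driver('cache', array('adapter' => 'redis', 'backup' => 'file'));

        $posts = $this->cache->get('recent_posts');
        if ($posts === FALSE) {
            // Cache miss: hit the database once, then keep the result in Redis for 5 minutes.
            $posts = $this->db->order_by('created_at', 'DESC')->limit(20)->get('posts')->result();
            $this->cache->save('recent_posts', $posts, 300);
        }

    With something like this in front of the hot queries, most of those 1700 requests per minute are served from Redis instead of opening database connections.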

  3. I did something similar with a Tomcat/Java-based app hosted in AWS, where we moved away from DynamoDB-based session management to AWS-managed Memcached. Below are some recommendations I can offer based on my experience.

    1. Set up automatic failover:
      Since DynamoDB was a managed service, we didn’t have to worry about failover or redundancy. But with managed ElastiCache, redundancy and Multi-AZ replication are not there by default; you have to enable them. If you want to maintain a seamless user experience when your Redis node fails over, make sure you enable Multi-AZ with automatic failover on your Redis cluster.

    2. Choosing the instance size

      Reads: You get 1700 requests per minute at peak time, which works out to roughly 1700 / 60 ≈ 30 requests per second. That is the minimum read throughput the selected ElastiCache instance should be able to handle.

      Writes:
      You haven’t mentioned how many new users are logging in, but it’s safe to assume it’s not going to be nearly as high as reads. Writes can be managed easily if you are flexible about the session length. For now it’s safe to assume write throughput will be lower than read throughput.

      Which means you can get away with using cache.t3.small (with Multi-AZ and automatic failover), which has 2 vCPUs and 1.37GB of memory; sufficient to accommodate your throughput and 100MB of session storage.

    3. Selecting the correct cache rotation strategy:
      You want to know whether the cache is rotated correctly and what happens to user performance when ElastiCache saturates. Make sure to set a correct TTL that matches your session length (see the TTL check sketch after this list).

    4. Know the limits and bottlenecks:
      Make sure you load test the app before pushing this change live. a) Understand how many login requests your app can handle as-is vs. after moving to ElastiCache, and have an upgrade strategy ready in case traffic grows. b) Verify the end-user impact when the cache expires. c) Verify the impact on new logins when the ElastiCache node is full.

    5. Set up monitoring and alerts: monitor at least the ElastiCache failover signal, and alert when ElastiCache latency goes beyond an acceptable threshold (a rough CloudWatch alarm sketch is included after this list).
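
    To sanity-check point 3, you can inspect the TTL that actually ends up on a session key and compare it with your configured session length. A minimal sketch using the phpredis extension; the endpoint is a placeholder and the 'ci_session:' prefix is an assumption about CodeIgniter 3’s default Redis session key naming:

        // Hedged sketch: verify that session keys in Redis carry the TTL you expect.
        $redis = new Redis();
        $redis->connect('my-redis.xxxxxx.0001.use1.cache.amazonaws.com', 6379);

        $key = 'ci_session:' . session_id();  // assumed default key prefix for CI3 Redis sessions
        $ttl = $redis->ttl($key);             // seconds until Redis expires the key

        log_message('info', "Session key {$key} expires in {$ttl}s");
        // This should track $config['sess_expiration']; -1 means no TTL, -2 means the key is gone.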
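
    For point 5, one option is to create the alarms programmatically with the AWS SDK for PHP. The hedged sketch below sets a single CloudWatch alarm on the Redis engine CPU; the cluster id, region, SNS topic ARN and thresholds are all placeholders, and you would add similar alarms for the other metrics you care about (latency, evictions, failover events):

        // Hedged sketch using aws/aws-sdk-php; all names and values are placeholders.
        require 'vendor/autoload.php';

        $cw = new Aws\CloudWatch\CloudWatchClient([
            'region'  => 'us-east-1',
            'version' => 'latest',
        ]);

        // Alarm when the Redis engine CPU averages above 80% over a 5-minute period.
        $cw->putMetricAlarm([
            'AlarmName'          => 'elasticache-engine-cpu-high',
            'Namespace'          => 'AWS/ElastiCache',
            'MetricName'         => 'EngineCPUUtilization',
            'Dimensions'         => [['Name' => 'CacheClusterId', 'Value' => 'my-redis-cluster-001']],
            'Statistic'          => 'Average',
            'Period'             => 300,
            'EvaluationPeriods'  => 1,
            'Threshold'          => 80,
            'ComparisonOperator' => 'GreaterThanThreshold',
            'AlarmActions'       => ['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
        ]);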

  4. Adding to the excellent answer from Parth, there are a few other things to watch for.

    1. Understand the implications of your multi-AZ configuration. AWS charges for cross-AZ traffic, and I have seen bills where ElastiCache replication traffic became a significant factor. At the very least, make sure that your chosen AZs match across all parts of your stack; in us-east-1 you have at least 5 AZs, though you probably don’t need to use all of them.

    2. When choosing instance size, understand the network limits as well as memory/CPU. Smaller instances have lower network throughput, which can become a bottleneck at scale. This applies to both the EC2 instances and the Redis nodes. Also, you can configure your ALB to support cross-AZ load balancing, but understand the implications of doing so.

    Update:

    So if you are using 4 AZs, here’s what you’ll be looking at for cross-AZ traffic. My assumption is 1 instance per AZ, and a primary Redis node with 3 replicas.

    Each time an item is written to Redis, it will be copied 3 times, once to each replica.

    In addition, each time an EC2 instance writes to Redis, there is a 75% chance that the write will incur a cross-AZ charge (the primary lives in one of the four AZs, so 3 of the 4 instances are writing across an AZ boundary).

    If you instead had 2 instances across 2 AZs, and a single Redis replica, you would only have a 50% chance of incurring a cross-AZ charge from EC2 to Redis, and only a single charge for Redis replication.

    Now, if you don’t have a lot of data in sessions, this may not be a huge charge, but as you scale up it’s worth watching.
