
We have a custom-built CRM system that’s deployed on AWS RDS. Our front-end and back-end systems are JavaScript, and the JS devs are prohibited from manipulating the database directly; they must go through predefined APIs or a controlled query system. It works beautifully and the devs have never managed to hurt the database 🙂

UPDATE: The controlled query system enforces JSON constructs as the only method the front-end and back-end systems have to update, insert and delete records. The Database monitors the JSON constructs and processes them accordingly, which includes stripping and denying data and requests that don’t meet specific guidelines.
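For illustration, the stripping-and-denying step might look something like this. The construct shape, operation list, and field allow-lists below are invented for the sketch; they are not the real system’s schema:

```javascript
// Hypothetical gate for a controlled query system: deny constructs that
// target unknown operations or tables, and strip fields that aren't on
// the allow-list for the table. All names here are made up for the sketch.
const ALLOWED_OPS = new Set(["insert", "update", "delete"]);
const ALLOWED_FIELDS = {
  contacts: ["name", "email"],
  accounts: ["name"],
};

function screenConstruct(construct) {
  // Deny: unknown operation or table.
  if (!ALLOWED_OPS.has(construct.op) || !(construct.table in ALLOWED_FIELDS)) {
    return { ok: false, reason: "denied" };
  }
  // Strip: keep only allow-listed fields for this table.
  const fields = Object.fromEntries(
    Object.entries(construct.fields ?? {}).filter(([key]) =>
      ALLOWED_FIELDS[construct.table].includes(key)
    )
  );
  return { ok: true, op: construct.op, table: construct.table, fields };
}
```

Only constructs that survive this screen would be translated into actual stored procedure calls.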

The Database has its own internal error reporting system that has worked very well, mainly because the JS devs often don’t report errors from stored procedure calls that fail (grr). So the Database reports these errors in a tracking table. That way we can see what bad payloads the JS devs are sending or catch the occasional error in our SQL.

UPDATE: The internal error reporting system is exactly that: any time a stored procedure triggers an unhandled error, it writes the error out to an error table that is monitored by the Database team. It exists at the core database level and has nothing to do with the query control system.

The problem is we’ve deployed a Read Replica of our database, and now the internal error reporting system is rendered useless for calls that fail on the RR: the replica is read-only, so the stored procedure’s write to the error table can’t happen there.

Is there a technique or method where the RR can somehow send write-out calls to the master? Or do I have to switch over to a log file system? If I have to do that, it pretty much requires us to abandon our master error table tracking…

So I guess the core question is: What internal MySQL error reporting system and technique should we consider for an enterprise-scale CRM database that utilizes Read Replicas?

Thanks!

2 Answers


  1. The RDS read replica cannot send a query to the source instance.

    What internal MySQL error reporting system and technique should we consider for an enterprise-scale CRM database that utilizes Read Replicas?

    Detect and report invalid payloads in the back-end code that processes requests, not in stored procedures.

    Your code can log the bad requests to whichever instance you want. I mean, you could log an error to the source instance even though the request (once validated) would have resulted in executing a stored procedure on the read replica instance.
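As a sketch of that idea: reads go to the replica, but error rows are written through a separate connection to the source (writer) instance. The `replicaPool` and `sourcePool` objects and the `error_log` table below are hypothetical stand-ins; with the `mysql2` package each pool would come from `mysql2.createPool({ host: ... })` pointed at the respective endpoint:

```javascript
// Sketch: call a stored procedure on the read replica, but route any
// failure to an error table on the source instance, since the replica
// itself cannot accept writes. Pool objects and table name are assumed.
async function callProcedure(replicaPool, sourcePool, sql, params) {
  try {
    return await replicaPool.query(sql, params);
  } catch (err) {
    // The replica is read-only, so the error row goes to the writer.
    await sourcePool.query(
      "INSERT INTO error_log (occurred_at, source, message) VALUES (NOW(), ?, ?)",
      ["read-replica", err.message]
    );
    throw err; // still surface the failure to the caller
  }
}
```

Note this moves the error capture out of the stored procedure and into the back-end code, which is the point of the recommendation above.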

    But I would recommend logging bad requests to some other logging system, not to the database. The bad requests do not relate to any other data in the database, and it just adds more storage load and query load to be logging bad requests in the database.

    AWS currently provides a service called Centralized Logging with OpenSearch. This sounds like a likely candidate to which to send reports of bad requests, but I have not used this AWS service so I can’t report any experience with it. It’s just an example of the kind of thing I’m talking about.

    Logging systems are generally simpler and less expensive than using a full-blown RDBMS for that task.

  2. The general direction is what Bill said in his answer. I’ll just add some AWS-specific flavor.

    In AWS, the go-to place for error logging is CloudWatch. It’s super easy to set up and use, and dirt cheap compared to storing your error logs in RDS.

    The flow is like this:

    1. Create a log group. It’s done once, and the expectation is that it lives for as long as your application is in business (hopefully years). It’s the analog of your error table.

    2. Once an instance of your back-end application starts, create a log stream. Put the code to do that into the application startup code.

    3. Wrap your validation logic and database calls into try-catch blocks. If a validator or a database call throws an exception, put an event with the exception description into the log stream.
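Step 3 above can be sketched as follows. Here `logStream` is a stand-in for a CloudWatch log stream (in production the push would go through the AWS SDK or a library like winston), and `dbCall` is whatever function executes the stored procedure:

```javascript
// Minimal sketch of step 3: wrap the database call in try/catch and
// record failures as log events instead of silently dropping them.
// `logStream` and `dbCall` are hypothetical stand-ins for the sketch.
async function runQuery(logStream, dbCall, payload) {
  try {
    return await dbCall(payload);
  } catch (err) {
    logStream.push({
      timestamp: Date.now(),
      message: `query failed: ${err.message}`,
      payload, // the bad payload, for later inspection
    });
    throw err; // still surface the error to the caller
  }
}
```

With winston and a CloudWatch transport, the `logStream.push` call would be replaced by a `logger.error(...)` call and the delivery to CloudWatch happens behind the scenes.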

    There is a nice library called winston, with a CloudWatch plugin, that can handle all this for you.

    CloudWatch has some rudimentary analytics and search capabilities, which might be enough for your use case. If they’re not, you can use a plethora of services such as QuickSight (AWS-native) or Kibana, Splunk, and New Relic (third-party), all of which integrate easily with CloudWatch.

    Describing them in detail is beyond the scope of this answer, but just about every single one of them is more convenient and operationally cheaper than storing the log data in RDS.
