
We have a user-facing web app backed by SQL Server that allows users to update a table in SQL Server; that update also needs to update a document record in our DynamoDB table.

How could I reliably ensure that both commits have taken place? We can tolerate up to a few seconds of latency.

2 Answers


  1. Short answer: you can’t.

    DynamoDB doesn’t support two-phase commit (aka distributed transactions) the way most (all?) relational databases do.

    Long answer: given that once DDB returns a successful (2xx) response the record is durable, you might consider the following (sketched in code after the list):

    • start transaction
    • write to SQL table
    • write to DDB
    • if DDB returns 200, commit the SQL transaction
    • else, roll back the SQL Server transaction
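
    For concreteness, a minimal sketch of that flow, assuming Python with pyodbc for SQL Server and boto3 for DynamoDB; the connection string, table names, and column names below are placeholders, not your actual schema:

    import json

    import boto3
    import pyodbc

    SQL_CONN_STR = "Driver={ODBC Driver 17 for SQL Server};Server=my-sql-host;Database=MyDb;UID=app;PWD=secret"
    documents = boto3.resource("dynamodb").Table("Documents")  # placeholder DDB table name

    def update_both(record_id, payload):
        conn = pyodbc.connect(SQL_CONN_STR, autocommit=False)  # pyodbc starts an implicit SQL transaction
        try:
            # 1. Write to SQL Server; nothing is visible until the commit below.
            conn.cursor().execute(
                "UPDATE dbo.Records SET Payload = ? WHERE Id = ?",
                (json.dumps(payload), record_id),
            )
            # 2. Write to DDB; a 2xx response means the item is durable,
            #    and boto3 raises an exception if the call fails outright.
            resp = documents.put_item(Item={"Id": record_id, "Doc": json.dumps(payload)})
            if resp["ResponseMetadata"]["HTTPStatusCode"] == 200:
                conn.commit()    # 3. both writes succeeded
            else:
                conn.rollback()
        except Exception:
            conn.rollback()      # 4. DDB (or SQL) failed: undo the SQL change
            raise
        finally:
            conn.close()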

    Another thought would be to take advantage of DDB Streams: have your app write only to DDB, and have another application (a Lambda) pick up the change and write it to SQL Server.

    The first option is "easier", but less robust. There’s no guarantee that something won’t go wrong (e.g. the app crashing) between the request to DDB and your app seeing the response, in which case you’d roll back the SQL change even though the DDB write went through.

    The second option is more work; basically you’re building (or buying?) a data replication engine from DDB to SQL Server. But since DDB stream records live for 24 hours, you’ve got that long to fix any problems on the SQL Server side and pick up where you left off.
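
    A rough sketch of what that replication piece could look like, assuming a Python Lambda wired to the table’s stream (with new images enabled) through an event source mapping, pyodbc plus an ODBC driver packaged with the function, and the same placeholder names as above:

    import pyodbc

    SQL_CONN_STR = "Driver={ODBC Driver 17 for SQL Server};Server=my-sql-host;Database=MyDb;UID=app;PWD=secret"

    def handler(event, context):
        """Invoked with a batch of DynamoDB Streams records."""
        conn = pyodbc.connect(SQL_CONN_STR, autocommit=False)
        cursor = conn.cursor()
        try:
            for record in event["Records"]:
                if record["eventName"] not in ("INSERT", "MODIFY"):
                    continue
                image = record["dynamodb"]["NewImage"]  # DynamoDB attribute-value map
                record_id = image["Id"]["S"]
                doc = image["Doc"]["S"]
                # Upsert so that reprocessing the same stream record is harmless.
                cursor.execute(
                    "MERGE dbo.Records AS t "
                    "USING (SELECT ? AS Id, ? AS Payload) AS s ON t.Id = s.Id "
                    "WHEN MATCHED THEN UPDATE SET Payload = s.Payload "
                    "WHEN NOT MATCHED THEN INSERT (Id, Payload) VALUES (s.Id, s.Payload);",
                    (record_id, doc),
                )
            conn.commit()
        except Exception:
            conn.rollback()
            raise  # let Lambda retry the batch; records stay in the stream for up to 24 hours
        finally:
            conn.close()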

  2. Why not just leverage the SQL transaction (because DynamoDB transactions are harder to work with)?

    <begin trx>
    save item in SQL db
    save item in Dynamo DB
    <commit trx>
    
    • If the save in SQL fails, it is rolled back
    • If the SQL save succeeds but the Dynamo save fails, you get an exception and roll back
    • If both succeed, you commit

    Am I missing something?
