
I usually work on projects with microservices, and I have been fighting performance issues running Docker on my local laptop. For now, I have created a remote environment in AWS where Docker runs and where I start all the containers when testing. I use Nodemon to watch for file changes and rebuild the artifact, but since the rebuild happens remotely while my code changes are local, I need a suggestion on how to keep the files synced between the two environments, LOCAL to REMOTE. I stay connected to the remote machine over SSH. I have tried mounting the remote folder on my local device with sshfs, but it does not work the way I need, so I am stuck with a trade-off. Please let me know which path I should take.


Answers


  1. I’ve used lsyncd for this. It monitors your local file system for changes and then uses rsync via ssh to mirror the changes to a remote machine.

    lsyncd -rsyncssh /path/to/local/src remotehost.org /path/to/remote/src
    

    You can also create more complex config files that have file exclusions and tweak the response times and such.
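    As a minimal sketch of such a config file (the paths, host name, delay, and exclusion patterns here are assumptions you would adapt to your project):

    ```lua
    -- lsyncd.conf.lua: mirror a local source tree to a remote host over rsync+ssh.
    -- Run with: lsyncd lsyncd.conf.lua
    settings {
        logfile    = "/tmp/lsyncd.log",
        statusFile = "/tmp/lsyncd.status",
    }

    sync {
        default.rsyncssh,
        source    = "/path/to/local/src",      -- local directory to watch
        host      = "remotehost.org",          -- remote ssh host
        targetdir = "/path/to/remote/src",     -- destination on the remote
        delay     = 1,                         -- seconds to batch changes before syncing
        exclude   = { "node_modules/", ".git/" },
    }
    ```

    Keeping `node_modules/` excluded and installing dependencies on the remote side avoids pushing thousands of files over SSH on every change.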

  2. One possible solution to keep your files synced between your local and remote environments is to use a version control system like Git. You can create a Git repository for your project and push your changes to the remote repository on AWS. Then, you can pull the changes on the remote environment using Git and rebuild the artifact.

    Here are the steps you can follow:

    1. Set up a Git repository for your project: Initialize a Git repository on your local machine and commit your initial code changes. Then, create a remote repository on a Git hosting service like GitHub or Bitbucket.

    2. Push changes to the remote repository: Push your code changes to the remote repository using Git. You can use Git commands like `git add`, `git commit`, and `git push` to do this.

    3. Pull changes on the remote environment: Connect to your remote environment using SSH and navigate to the project directory. Then, use `git pull` to pull the changes from the remote repository.

    4. Rebuild the artifact: Once you have pulled the changes, Nodemon will detect the updated files and restart, or you can rebuild the artifact with whatever build tool you are using.

    5. Repeat the process: Whenever you make changes to your local code, commit the changes to the Git repository and push them to the remote repository. Then, pull the changes on the remote environment and rebuild the artifact.

    This approach will ensure that your code changes are synced between your local and remote environments, and you can easily rebuild the artifact on the remote environment whenever you make changes.
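    The round trip can be sketched entirely on one machine, with a bare repository standing in for the hosted remote and a second clone standing in for the AWS environment (all paths and the commit message here are made up for the demonstration):

    ```shell
    set -e
    # A bare repo plays the role of the GitHub/Bitbucket remote.
    tmp=$(mktemp -d)
    git init -q --bare -b main "$tmp/hub.git"

    # "Local" working copy: commit a change and push it.
    git init -q -b main "$tmp/local" && cd "$tmp/local"
    git config user.email dev@example.com && git config user.name dev
    echo 'console.log("hello")' > app.js
    git add app.js && git commit -qm "Initial commit"
    git push -q "$tmp/hub.git" HEAD:main

    # "Remote" environment: fetch the same code from the hub.
    git clone -q "$tmp/hub.git" "$tmp/remote"
    cat "$tmp/remote/app.js"
    ```

    On the real AWS host, step 3 in the list above would be the `git clone`/`git pull` side of this, run over your existing SSH session.
    
    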

  3. I do this sort of test in three stages.

    For the first stage, don’t involve Docker at all. Use Node, Nodemon, and whatever other local tools to do development as normal. In your test suite, use mocks and similar techniques so that you can verify your application behavior without actually being able to connect to other containers. If you need to do manual testing in this environment, you can configure your application to talk to the remote server, or use a port-forward (ssh -L, kubectl port-forward).

    For the second stage, docker build an image. Run that image locally, without any sort of volume mounts and without using Nodemon. You might again be able to configure the container to talk to the remote system, or you might be able to use a tool like Docker Compose to run the entire application stack locally. Run a set of integration tests against this local-but-containerized environment.
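    For the Compose variant of this stage, a minimal sketch might look like the following (the service names, ports, and database image are assumptions, not your actual stack):

    ```yaml
    # docker-compose.yml: build the app image from the local Dockerfile and
    # run it next to its backing services, with no source-code volume mounts.
    services:
      api:
        build: .
        ports:
          - "3000:3000"
        environment:
          - DB_HOST=db
      db:
        image: postgres:15
        environment:
          - POSTGRES_PASSWORD=example
    ```

    Because the `api` service is built from the image rather than mounted from the host, this stage tests the artifact exactly as it will later run remotely.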

    If that works, commit and push your branch to source control and have your CI system build an official image.

    Now on the target system, you (or your CI system) need to change the image tag to what your CI built, and delete and recreate the container. Pulling the image will include the new code directly in the image. This is the same way you’ll deploy the application, so this gives you a pre-deployment environment to do full-system tests.

    None of these steps involve Docker mounts; when Docker is involved, the code is exactly the code in the image. There is no filesystem syncing and there are no remote connections between machines. This same fundamental approach works with any language, even compiled languages where the running application doesn’t usually include source code.
