
I have a server application (that I cannot change) that, when you connect as a client, gives you other URLs to interact with. Those URLs point back to the same server, so the advertised URLs use the hostname of its Docker container.

We are running a mixed environment (some Docker containers, some regular applications). We need a setup where the server runs as a Docker container on a single VM, and that server is accessed by non-Docker clients (as well as Docker clients not on the same Docker network).

So there are two hostnames: the server hostname (the Docker container's hostname) and the Docker host hostname (the hostname of the VM running Docker).

The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678, which is not resolvable by the client. So far, we've addressed this by adding the server hostname to the client's /etc/hosts file, but this is a pain to maintain.
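
For illustration, the workaround amounts to an entry like this on every client, assuming the advertised port is published on the Docker host (the IP 203.0.113.10 is a placeholder for the Docker host's address):

    # /etc/hosts on each client: resolve the container's hostname to the Docker host
    203.0.113.10   serverhostname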

I have also set the --hostname of the server's Docker container to the same name as the Docker host, and that has mostly worked, but I have seen a case where a Docker container running on the same Docker network as the server had issues connecting to it.
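
For reference, that workaround looks roughly like this (the image name myserver is a placeholder; the ports are the ones from the example above):

    # Give the container the Docker host's name so the URLs it advertises
    # resolve for external clients, and publish the ports it serves on
    docker run -d --hostname dockerhostname -p 1234:1234 -p 5678:5678 myserver:latest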

I realize this is not an ideal Docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.

I'm really curious whether anyone has advice or lessons learned for this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts approach we're already using.)

2 Answers


  1. You can do port mapping with -p 8080:80 (there is a short sketch at the end of this answer).

    How do you build and run your container? With a shell command, a Dockerfile, or a Compose YAML file?

    Check the published ports with:

    docker port <container>

    Then call the service with the Docker host's address and the published port:

    [SERVER IP]:[PORT ON DOCKER HOST]

    To work with hostnames you need DNS, or you have to use the hosts file.

    The hosts file is not a good solution; that is how the early internet was run. If something changes, you have to update the hosts file on every client!

    Or use a static IP for your container:

    # List the existing Docker networks
    docker network ls
    # Create a user-defined network (a subnet is needed to assign static IPs)
    docker network create my-network
    docker network create --subnet=172.18.0.0/16 mynet123
    # Start a container with a fixed IP on that network
    docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash

    Assign static IP to Docker container
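
    As a minimal sketch of the port-mapping approach (the container/image name
    myserver is a placeholder; 5678 is the advertised port from the question):

    # Publish the container's port on the Docker host so clients can reach it
    # through the Docker host's name or IP
    docker run -d --name myserver -p 5678:5678 myserver:latest

    # Show which host ports are published for this container
    docker port myserver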

  2. You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to the /etc/hosts file" process. You can use configuration management like Ansible/Chef/Puppet, so you only have to update one location and distribute it out.

    But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this: you need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.

    Now, implementing the database is the hardest part of this; there are a bunch of different ways, and you'll need to spend some time with your team to figure out which is best.

    Normally, running an internal DNS server like dnsmasq or BIND should get you most of the way; if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation (a minimal dnsmasq sketch follows below).
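
    As an illustration of the internal-DNS route, a single dnsmasq entry could
    map the container's advertised hostname to the Docker host's IP, with the
    clients' resolvers pointed at that DNS server (the hostname and IP below
    are placeholders):

    # /etc/dnsmasq.conf
    # Resolve serverhostname (and any subdomains) to the Docker host's address
    address=/serverhostname/203.0.113.10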
