
Can we have a docker run command without an entry point?
As far as I can see, there must be an entry point (if not specified, it seems to default to bash in most cases). Otherwise, I cannot get an image running as a container.

Say I install Linux on a physical machine.
A service runs in the background, listening on a port and doing something.
Before I log in, no bash shell is running, and I can still reach that background service from another machine by host name and port.

Can I do something similar with docker run? That is, I would not need an entry point (or the entry point would be a system process instead of bash?), and the container would just stay up together with its background services.

2 Answers


  1. You don’t need an entry point to run a Docker image; you can run it with the command below. If your Dockerfile sets an ENTRYPOINT, remove it.

    docker run image_name
    

    Also, share your Dockerfile here; it might help people answer your question better.

  2. If you docker run a container without any special entrypoint or command options, it runs the single command specified by its image’s ENTRYPOINT and CMD directives. This should be the usual way you run a container. For example:

    # Launches a PostgreSQL server, without manually specifying a command
    docker run \
      -d -p 5432:5432 -v pgdata:/var/lib/postgresql/data \
      postgres:14
    

    Anything you put after the image name overrides the CMD in the image. You can use this for various sorts of useful debugging, or to run multiple containers off the same code base.

    docker build -t django-image .
    
    # What's in that image?
    docker run --rm django-image \
      ls -l /app
    
    # Get an interactive shell in a temporary container
    docker run --rm -it django-image \
      bash
    
    # Launch the Django server normally
    docker run -d --name web -p 8000:8000 django-image
    
    # Also launch a Celery worker off that same image
    docker run -d --name worker django-image \
      celery worker
    

    You should not normally need the docker run --entrypoint option. Since it is a Docker option, it needs to appear before the image name, and any arguments to that command go in the "command" slot after the image name.

    # This syntax is awkward; try to avoid it
    docker run --rm \
      --entrypoint ls \
      django-image \
      -l /app
    

    In your Dockerfile ENTRYPOINT is totally optional. Prefer setting the CMD and not ENTRYPOINT if you think you’ll ever need any of the command-override forms suggested above, including debugging during initial development.

    # no ENTRYPOINT
    CMD python ./manage.py runserver 0.0.0.0:8000
    

    Do not set ENTRYPOINT to an interpreter like python and put the rest of the command in CMD; this apparently works but limits commands to only things that are Python scripts, so you’re forced into the awkward docker run --entrypoint setup for anything else.
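
    As a sketch of that anti-pattern (a hypothetical Dockerfile fragment, not from the question):

    ```dockerfile
    # Anti-pattern: the interpreter is baked into ENTRYPOINT
    ENTRYPOINT ["python"]
    CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
    # Now a command override like `docker run image ls -l /app`
    # actually runs `python ls -l /app`, which fails
    ```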

    If you do set ENTRYPOINT, the setup I find most useful is to have it be a wrapper shell script that does some initial setup, then ends with exec "$@" to run whatever the CMD is. This works even with the various command-override forms, which still run the wrapper script but then run the user-provided command at the end. The Dockerfile ENTRYPOINT line must be a JSON array ("exec syntax") for this to work.
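
    A minimal sketch of that wrapper (the script name and the setup step here are assumptions, not from the original):

    ```shell
    #!/bin/sh
    # entrypoint.sh -- hypothetical wrapper script
    set -e

    # One-time setup would go here (run migrations, wait for a database, ...)
    echo "setup complete"

    # Replace this shell with whatever command was given: the image's CMD,
    # or a command override from the docker run command line
    exec "$@"
    ```

    With ENTRYPOINT ["./entrypoint.sh"] (JSON-array form) and a normal CMD, an override like docker run image bash still runs the setup first and then drops into the shell.
    
    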

    My one exception to this is when building micro-images for Go programs that literally contain nothing besides the compiled program itself. There is neither a shell nor any other tools in the image, and it’s impossible to do anything besides run the one program. In that case only, I’ll set ENTRYPOINT to the path to the compiled binary; it still must be a JSON array.
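
    Such an image might look like this (a sketch; the Go version and binary name are made up):

    ```dockerfile
    # Build a static Go binary, then copy only it into an empty image
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /myapp .

    FROM scratch
    COPY --from=build /myapp /myapp
    # JSON-array form is required: there is no shell in the image
    # to interpret a string-form ENTRYPOINT
    ENTRYPOINT ["/myapp"]
    ```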
