I am trying to deploy my Go app in Docker using an Alpine image. It works on my Mac, but when I moved to production on CentOS 8 I ran into issues.
Here is my Dockerfile:
FROM golang:alpine
RUN apk add --no-cache postgresql
RUN apk update && apk add --no-cache gcc && apk add --no-cache libc-dev && apk add --no-cache --update make
# Set the current working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. They will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the WORKDIR inside the container
COPY . .
# Build the Go app
RUN go build .
RUN rm -rf /usr/local/var/postgres/postmaster.pid
# make setup runs commands like: psql -c 'DROP DATABASE IF EXISTS prod'
# and: psql -c 'CREATE USER prod'
RUN make setup
# Expose port 3000 to the outside world
EXPOSE 3000
CMD ["make", "run" ]
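For reference, a build-only version of this Dockerfile with the syntax errors fixed (`#` comments instead of `//`, a valid EXPOSE line) and the psql steps removed, since no PostgreSQL server is running during docker build, might look like this. It assumes the same Makefile targets as the original:

```dockerfile
FROM golang:alpine

# Build dependencies only; the database server belongs in its own container.
# postgresql-client still provides psql for runtime use.
RUN apk add --no-cache gcc libc-dev make postgresql-client

# Set the current working directory inside the container
WORKDIR /app

# Copy go mod and sum files first so the download layer is cached
COPY go.mod go.sum ./
RUN go mod download

# Copy the source and build the Go app
COPY . .
RUN go build .

# Expose port 3000 to the outside world
EXPOSE 3000

# Migrations and user/database creation happen at container start
# or in the database container, not at image-build time
CMD ["make", "run"]
```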
Then I got this error:
psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My make setup runs the migration and creates the user and database.
Can I also make a SUPERUSER in psql on that Alpine image?
Is anything wrong in the syntax above, and how do I correct it? I have been stuck since yesterday.
2 Answers
Delete lines 8 through 20 of your original Dockerfile and add these.
If your folder structure looks like this:
You cannot run database commands in a Dockerfile.

By analogy, consider the go generate command: you can embed special comments in your Go source code that ask the Go compiler to run programs for you, typically to generate other source files. Say you put //go:generate psql ... in your source code and run go generate ... && go install . Now you run that compiled binary on a different system. Since you're not pointing at the same database any more, the database setup is lost.

In the same way, a Dockerfile produces a compiled artifact (in this case the Docker image) and it needs to run independently of its host environment. In your example you could docker push the image you built on macOS to a registry, and docker run it from the CentOS host without rebuilding it (and that's probably better practice for a production system).

For the specific commands you show in the question, you could put them in a database container's /docker-entrypoint-initdb.d directory, or otherwise just run them once pointing at your database. For more general-purpose database setup you might look at running a database migration tool at application startup, either in your program's main() function or in a wrapper entrypoint script.
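As a concrete sketch of the /docker-entrypoint-initdb.d route, assuming the official postgres image and a hypothetical file name like 01-setup.sql (the CREATE DATABASE line is an assumption; the question only shows the DROP and CREATE USER commands):

```sql
-- 01-setup.sql: executed once, when the postgres container first
-- initializes its data directory
DROP DATABASE IF EXISTS prod;
CREATE USER prod;
CREATE DATABASE prod OWNER prod;  -- assumed; adjust to your make setup
```

Any *.sql or *.sh file mounted into the official postgres image's /docker-entrypoint-initdb.d directory runs once on first initialization, e.g. with docker run -v "$PWD/initdb:/docker-entrypoint-initdb.d" postgres. On later starts with the same data volume the scripts are skipped, so this only covers one-time setup, not ongoing migrations.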