
I’m following https://github.com/PacktPublishing/Apache-Kafka-Series---Kafka-Connect-Hands-on-Learning and I have the docker-compose file below (running on a Mac).

version: '2'

services:
  # this is our kafka cluster.
  kafka-cluster:
    image: landoop/fast-data-dev:cp3.3.0
    environment:
      ADV_HOST: localhost         # Change to 192.168.99.100 if using Docker Toolbox
      RUNTESTS: 0                 # Disable Running tests so the cluster starts faster
    ports:
      - 2181:2181                 # Zookeeper
      - 3030:3030                 # Landoop UI
      - 8081-8083:8081-8083       # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585       # JMX Ports
      - 9092:9092                 # Kafka Broker

and when I run

docker-compose up kafka-cluster
[+] Running 1/0
 ⠿ Container code-kafka-cluster-1  Created                                                       0.0s
Attaching to code-kafka-cluster-1
code-kafka-cluster-1  | Setting advertised host to 127.0.0.1.
code-kafka-cluster-1  | runtime: failed to create new OS thread (have 2 already; errno=22)
code-kafka-cluster-1  | fatal error: newosproc
code-kafka-cluster-1  | 
code-kafka-cluster-1  | runtime stack:
code-kafka-cluster-1  | runtime.throw(0x512269, 0x9)
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/panic.go:566 +0x95
code-kafka-cluster-1  | runtime.newosproc(0xc420026000, 0xc420035fc0)
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/os_linux.go:160 +0x194
code-kafka-cluster-1  | runtime.newm(0x5203a0, 0x0)
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/proc.go:1572 +0x132
code-kafka-cluster-1  | runtime.main.func1()
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/proc.go:126 +0x36
code-kafka-cluster-1  | runtime.systemstack(0x593600)
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/asm_amd64.s:298 +0x79
code-kafka-cluster-1  | runtime.mstart()
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/proc.go:1079
code-kafka-cluster-1  | 
code-kafka-cluster-1  | goroutine 1 [running]:
code-kafka-cluster-1  | runtime.systemstack_switch()
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/asm_amd64.s:252 fp=0xc420020768 sp=0xc420020760
code-kafka-cluster-1  | runtime.main()
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/proc.go:127 +0x6c fp=0xc4200207c0 sp=0xc420020768
code-kafka-cluster-1  | runtime.goexit()
code-kafka-cluster-1  |         /usr/lib/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200207c8 sp=0xc4200207c0
code-kafka-cluster-1  | Could not successfully bind to port 2181. Maybe some other service
code-kafka-cluster-1  | in your system is using it? Please free the port and try again.
code-kafka-cluster-1  | Exiting.
code-kafka-cluster-1 exited with code 1

Note: % sudo lsof -i :2181 – this command shows no output.


Answers


  1. The error suggests something else on your machine is already using port 2181. Either stop that process, or remove the 2181 port mapping entirely (ports section sketched below), since you should not need to connect to Zookeeper directly in order to use Kafka. As of recent Kafka versions (which the linked course is unlikely to be using), the --zookeeper flags have been removed from the Kafka CLI tools.

    Another option is not to use the Landoop container at all; plenty of other Docker Compose files for Kafka exist on the web.

    Overall, I’d suggest not using Docker at all for developing a Kafka Connector.
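    For reference, a minimal sketch of the ports section with the Zookeeper mapping removed (the rest of the compose file from the question stays the same):

    ports:
      - 3030:3030                 # Landoop UI
      - 8081-8083:8081-8083       # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585       # JMX Ports
      - 9092:9092                 # Kafka Broker

    With a reasonably recent Kafka CLI you would then talk to the broker directly, e.g. kafka-topics --bootstrap-server localhost:9092 --list, rather than passing any --zookeeper flag.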

  2. The landoop/fast-data-dev image does not work on the arm64 Apple M1 chip.

    You can fix the problem by updating the Dockerfile, as described here:
    https://github.com/lensesio/fast-data-dev/issues/175#issuecomment-947001807
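
    If you want to confirm the architecture mismatch first, one quick check (assuming Docker Desktop on an Apple Silicon Mac, and that the image has already been pulled) is:

    uname -m                                      # arm64 on Apple Silicon
    docker image inspect landoop/fast-data-dev:cp3.3.0 \
      --format '{{.Architecture}}'                # typically amd64 for this image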

  3. Change the Zookeeper port mapping as below:

    ports:
          - 2182:2181                 # Zookeeper
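
    To confirm something is now listening on the remapped host port, a quick check (nc ships with macOS; this flag combination is an assumption about your local setup):

    nc -vz localhost 2182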
    
  4. You can build a new Docker image and run it with the following commands:

    git clone https://github.com/faberchri/fast-data-dev.git
    cd fast-data-dev
    docker build -t faberchri/fast-data-dev .
    docker run --rm -p 3030:3030 faberchri/fast-data-dev
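
    This publishes only port 3030 (the Landoop UI). If you want the same ports as the compose file in the question, a sketch of a fuller run command (port list copied from that file; adjust as needed):

    docker run --rm \
      -e ADV_HOST=localhost \
      -p 2181:2181 -p 3030:3030 \
      -p 8081-8083:8081-8083 \
      -p 9581-9585:9581-9585 \
      -p 9092:9092 \
      faberchri/fast-data-dev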
    
  5. After looking into Namig Aliyev's answer, here is what worked for me.

    Let's say your working directory is kafka and inside it you have your docker-compose.yml file.

    Follow these steps to reproduce the same result:

    1. git clone https://github.com/faberchri/fast-data-dev.git
    2. Update docker-compose.yml: in the kafka-cluster service, replace the image line with "build: ./fast-data-dev/" (see the sketch below).
    3. docker-compose run kafka-cluster
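
    A minimal sketch of what the updated service could look like after step 2 (everything else from the compose file in the question is unchanged):

    services:
      kafka-cluster:
        build: ./fast-data-dev/       # build the cloned fork instead of pulling landoop/fast-data-dev
        environment:
          ADV_HOST: localhost
          RUNTESTS: 0
        ports:
          - 2181:2181                 # Zookeeper
          - 3030:3030                 # Landoop UI
          - 8081-8083:8081-8083       # REST Proxy, Schema Registry, Kafka Connect ports
          - 9581-9585:9581-9585       # JMX Ports
          - 9092:9092                 # Kafka Broker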

    Wait a couple of minutes and the cluster should come up, accessible via the Landoop UI (port 3030 in the compose file above).

    This is what worked for me.
