The details of how the Spark images are built in different layers can be found in the blog post written by André Perez on Medium (Towards Data Science).
# Build Spark Images
./build.sh
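The commands below are a minimal sketch of what a layered build like this typically looks like; the Dockerfile and image names (cluster-base, spark-base, spark-master, spark-worker) are assumptions modeled on André Perez's setup, not taken verbatim from build.sh.
# Hypothetical layered build: each image extends the one before it
docker build -f cluster-base.Dockerfile -t cluster-base .
docker build -f spark-base.Dockerfile -t spark-base .
docker build -f spark-master.Dockerfile -t spark-master .
docker build -f spark-worker.Dockerfile -t spark-worker .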
# Create Network
docker network create kafka-spark-network
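An optional sanity check (not part of the original steps) to confirm the network exists:
# Optional: confirm the network was created
docker network ls --filter name=kafka-spark-network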
# Create Volume
docker volume create --name=hadoop-distributed-file-system
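Similarly, an optional check to confirm the named volume and see its mountpoint on the host:
# Optional: inspect the named volume
docker volume inspect hadoop-distributed-file-system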
# Start Docker Compose (run within both the kafka and spark folders)
docker compose up -d
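Once the services are up, it can help to verify that all containers are running and to tail logs; the service name below is a placeholder, since the actual names depend on the compose files:
# Optional: check container status and follow a service's logs
docker compose ps
docker compose logs -f --tail=50 <service-name>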
In-depth explanation of Kafka listeners
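In short, a broker running in Docker advertises different addresses to clients inside and outside the Docker network. The values below are illustrative defaults for Confluent-style Kafka images, not copied from this repo's compose file:
# Illustrative listener configuration (compose environment entries):
#   KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
#   KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker:29092,EXTERNAL://localhost:9092
#   KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
#   KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL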
# Stop Docker Compose (run within both the kafka and spark folders)
docker compose down
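If the stack's named volumes should be removed along with the containers, compose supports that directly (optional, and destructive to data):
# Optional: also remove volumes declared in the compose file
docker compose down -v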
# Delete all Containers
docker rm -f $(docker ps -a -q)
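A narrower alternative, in case only stopped containers should be removed:
# Alternative: remove only stopped containers
docker container prune -f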
# Delete all volumes
docker volume rm $(docker volume ls -q)
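Note that docker volume rm fails for volumes still attached to a container, so remove the containers first. An alternative that targets only unused volumes (on recent Docker versions, add --all to include named volumes):
# Alternative: remove unused volumes only
docker volume prune -f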