Hi,
I ran into an issue. Can anyone help? Thanks in advance.
After deploying docker-spark to a server (192.168.10.8), I tried to test it from another server (192.168.10.7).
The same version of Spark is installed on 192.168.10.7.
Command-line steps:
spark-shell --master spark://192.168.10.8:7077 --total-executor-cores 1 --executor-memory 512M
# xxxx
# some output here
# xxxx
val textFile = sc.textFile("file:///opt/spark/README.md");
textFile.first();
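For reference, here is a minimal standalone equivalent of the shell session above, as a sketch (the app name ReadReadme is just a placeholder; the configs mirror the --total-executor-cores and --executor-memory flags):

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch reproducing the shell session against the standalone master.
object ReadReadme {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ReadReadme")
      .master("spark://192.168.10.8:7077")      // same master as the shell session
      .config("spark.cores.max", "1")           // equivalent of --total-executor-cores 1
      .config("spark.executor.memory", "512m")  // equivalent of --executor-memory 512M
      .getOrCreate()

    // Same read and action as in the shell: first line of the local README.
    val textFile = spark.sparkContext.textFile("file:///opt/spark/README.md")
    println(textFile.first())

    spark.stop()
  }
}
```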
I get the error below (the message repeats in an endless loop):
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
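The warning says to check the cluster UI for registered workers and resources; a quick sketch of pulling the master's worker list programmatically (assuming the standalone master's default web UI port 8080, which the docker setup may map differently):

```scala
import scala.io.Source

// Dump the standalone master's status JSON, which lists registered workers
// along with their free cores and memory.
object CheckWorkers {
  def main(args: Array[String]): Unit = {
    val src = Source.fromURL("http://192.168.10.8:8080/json")
    try println(src.mkString) finally src.close()
  }
}
```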