Commit 426042a

SPARK-1330 removed extra echo from compute-classpath.sh

Remove the extra echo which prevents spark-class from working. Note that I did not update the comment above it, which is also wrong, because I'm not sure what it should do: should Hive only be included if explicitly built with "sbt hive/assembly", or should "sbt assembly" build it?

Author: Thomas Graves <[email protected]>

Closes #241 from tgravescs/SPARK-1330 and squashes the following commits:

b10d708 [Thomas Graves] SPARK-1330 removed extra echo from compute-classpath.sh
1 parent 5b2d863 commit 426042a

File tree

1 file changed: 0 additions, 1 deletion


bin/compute-classpath.sh

Lines changed: 0 additions & 1 deletion
@@ -36,7 +36,6 @@ CLASSPATH="$SPARK_CLASSPATH:$FWDIR/conf"
 # Hopefully we will find a way to avoid uber-jars entirely and deploy only the needed packages in
 # the future.
 if [ -f "$FWDIR"/sql/hive/target/scala-$SCALA_VERSION/spark-hive-assembly-*.jar ]; then
-  echo "Hive assembly found, including hive support. If this isn't desired run sbt hive/clean."
 
   # Datanucleus jars do not work if only included in the uberjar as plugin.xml metadata is lost.
   DATANUCLEUSJARS=$(JARS=("$FWDIR/lib_managed/jars"/datanucleus-*.jar); IFS=:; echo "${JARS[*]}")
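
Why the stray echo matters: compute-classpath.sh is not run for human eyes; the launcher captures its stdout and uses it verbatim as the JVM classpath, so any message echoed to stdout gets mixed into the classpath string. The following is a minimal sketch of that pattern, assuming a bin/ layout with both scripts side by side; the variable handling is illustrative and not a copy of the real bin/spark-class:

#!/usr/bin/env bash
# Illustrative launcher sketch (not the actual bin/spark-class source).

# Assumed layout: this script lives in bin/, next to compute-classpath.sh.
FWDIR="$(cd "$(dirname "$0")/.."; pwd)"

# Everything compute-classpath.sh prints on stdout becomes the classpath.
# A diagnostic echo inside that script (such as the removed
# "Hive assembly found..." line) would therefore end up inside
# $CLASSPATH and break the java invocation below.
CLASSPATH="$("$FWDIR/bin/compute-classpath.sh")"

# A human-readable notice would have to go to stderr instead, e.g.:
#   echo "Hive assembly found, including hive support." 1>&2

exec java -cp "$CLASSPATH" "$@"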
