[SPARK-4901] [SQL] Hot fix for ByteWritables.copyBytes #3742


Closed

Conversation

chenghao-intel
Contributor

HiveInspectors.scala fails to compile with Hadoop 1, as BytesWritable.copyBytes is not available in Hadoop 1.

@SparkQA

SparkQA commented Dec 19, 2014

Test build #24641 has started for PR 3742 at commit bb04d1f.

  • This patch merges cleanly.

@SparkQA

SparkQA commented Dec 19, 2014

Test build #24641 has finished for PR 3742 at commit bb04d1f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@AmplabJenkins

Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24641/

@JoshRosen
Contributor

LGTM. It looks like this properly uses getBytes() by also using getLength() to determine which subarray of the getBytes() result contains valid data (see HADOOP-6298).

I suppose this could use Arrays.copyOfRange to cut out one line (like was done in fc616d5#diff-364713d7776956cb8b0a771e9b62f82dR1435), but that's just a minor thing, so I'm going to merge this to fix the Hadoop 1 build. Thanks!
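The getBytes()/getLength() pattern described above can be sketched as follows. This is only an illustration: `FakeBytesWritable` is a hypothetical stand-in for Hadoop's `org.apache.hadoop.io.BytesWritable`, whose backing buffer may be larger than the valid data, which is exactly why `getBytes()` alone is unsafe (HADOOP-6298) and `getLength()` must bound the copy.

```java
import java.util.Arrays;

// Hypothetical stand-in for Hadoop's BytesWritable, for illustration only.
// The real class keeps a backing buffer that can be over-allocated, so
// getBytes() may return trailing garbage past the valid data.
class FakeBytesWritable {
    private final byte[] buffer; // backing array, possibly larger than the data
    private final int length;    // number of valid bytes

    FakeBytesWritable(byte[] data) {
        // Simulate over-allocation: pad the buffer past the valid data.
        this.buffer = Arrays.copyOf(data, data.length + 4);
        this.length = data.length;
    }

    byte[] getBytes() { return buffer; }  // may include padding past `length`
    int getLength()   { return length; }
}

public class Main {
    public static void main(String[] args) {
        FakeBytesWritable bw = new FakeBytesWritable(new byte[] {1, 2, 3});

        // Hadoop-1-compatible copy: slice the valid prefix with getLength()
        // instead of calling the Hadoop-2-only copyBytes().
        byte[] copy = Arrays.copyOfRange(bw.getBytes(), 0, bw.getLength());

        System.out.println(copy.length);            // 3, not the padded 7
        System.out.println(Arrays.toString(copy));  // [1, 2, 3]
    }
}
```

On Hadoop 2, `copyBytes()` performs this same bounded copy internally; the one-liner above is the portable equivalent Josh suggests via `Arrays.copyOfRange`.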

@asfgit asfgit closed this in 5479450 Dec 19, 2014
@chenghao-intel chenghao-intel deleted the settable_oi_hotfix branch July 2, 2015 08:41