
Adding first, last and describe convenience functions. #42


Draft: wants to merge 3 commits into main

Conversation

@codetalker7 commented Aug 9, 2023

Still a draft PR.

This PR adds implementations of the first, last and describe convenience functions.

Examples

Here is how first works:

julia> using DTables;

julia> table = (a=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], b=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);

julia> d = DTable(table, 3)
DTable with 4 partitions
Tabletype: NamedTuple

julia> first_five = DTables.first(d, UInt(5))
DTable with 2 partitions
Tabletype: NamedTuple

julia> fetch(first_five)
(a = [1, 2, 3, 4, 5], b = [1, 2, 3, 4, 5])

TODO:

  • Finalize implementation of first.
  • Finalize implementation of last.
  • Finalize implementation of describe.
  • Add documentation.
  • Add relevant tests.

@codetalker7 (Author) commented:

@jpsamaroo how does this implementation of first look? If this is okay, I'll implement last similarly.

return table
end

chunk_length = chunk_lengths(table)[1]
@krynju (Member) commented:

Chunk lengths are not guaranteed to be equal; some chunks may even be empty.
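
For illustration, a hedged sketch of how this can happen (the printed lengths are what the partitioning below would give; `chunk_lengths` is the internal helper already used in this PR's diff, and `filter` on a DTable preserves the partition structure but not the row counts):

julia> d = DTable((a=collect(1:10),), 3);

julia> f = filter(r -> r.a > 5, d);

julia> DTables.chunk_lengths(f)   # uneven, and the first chunk is now empty
4-element Vector{Int64}:
 0
 1
 3
 1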

@codetalker7 (Author) commented:

Hi @krynju. If this is the case, is there any way to retrieve the original chunksize? If I'm not wrong, it's not stored as a property of the DTable.

On another note: suppose that for a DTable the chunksize is greater than the number of rows in the table. In that case, won't I lose information about what chunksize I passed?

@krynju (Member) commented:

Yeah, I think it was an early design decision to make chunksize an argument of the constructor for the initial partitioning and to ignore it afterwards (and for that reason not store it either).

I think including the original chunksize in the logic would also be a bit confusing and would make it more complex, but if we ever have a use case for it we can revisit this.

I did think of caching the current chunk sizes, because generally that information doesn't change in a DTable (after you manipulate a DTable it becomes a new DTable). We already cache the schema, so a similar mechanism could be used.
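
For illustration only, a hypothetical sketch of that caching idea; the `chunk_lengths_cache` field is invented here and does not exist in DTables today:

function cached_chunk_lengths!(d)
    # Assumed field on the DTable struct, analogous to the cached schema.
    if d.chunk_lengths_cache === nothing
        d.chunk_lengths_cache = chunk_lengths(d)   # compute once from the chunks
    end
    return d.chunk_lengths_cache
end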


@krynju (Member) commented:

On another note: suppose that for a DTable the chunksize is greater than the number of rows in the table. In that case, won't I lose information about what chunksize I passed?

You will, and you will only get one partition in the DTable.
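
For example (printed form assumed from the show output earlier in this PR):

julia> DTable((a=[1, 2, 3],), 10)   # chunksize larger than the row count
DTable with 1 partitions
Tabletype: NamedTuple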

@codetalker7 (Author) commented Aug 10, 2023:


@krynju how about this: to get the chunksize, can I get the maximum value from chunk_lengths? Certainly this maximum should be the original chunksize, except for a boundary case where chunksize is greater than the number of rows.

@krynju (Member) commented:

Again, not guaranteed. Why do you need the original chunksize?

Comment on lines 275 to 277
extra_chunk_rows = rowtable(fetch(extra_chunk))
new_chunk = Dagger.tochunk(sink(extra_chunk_rows[1:needed_rows]))
required_chunks = vcat(table.chunks[1:num_full_chunks], [new_chunk])
@krynju (Member) commented Aug 9, 2023:

It's better to do this with Dagger.@spawn and make the last DTable chunk just a thunk (i.e., the result of Dagger.@spawn).
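
A minimal sketch of that suggestion, reusing names from the diff above (`sink`, `rowtable`, `extra_chunk`, `needed_rows`, and `num_full_chunks` come from this PR; `cut_rows` is a hypothetical helper):

# Dagger materializes chunk arguments before the function runs,
# so `part` arrives here as an ordinary table value.
cut_rows(part, n) = sink(rowtable(part)[1:n])

new_chunk = Dagger.@spawn cut_rows(extra_chunk, needed_rows)
required_chunks = vcat(table.chunks[1:num_full_chunks], [new_chunk])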

@codetalker7 (Author) commented:

@krynju how does it look now? I've used the maximum among all chunk_lengths to get the original chunk size, and made the last chunk a thunk.

@krynju (Member) commented Sep 1, 2023:

We should use the actual chunk lengths, not a maximum of them.

When you call first(d, 50) it should go something like this (just sketching the idea):

s = 50
csum = 0
chunks = []
for (cl, chunk) in zip(chunk_lengths(d), d.chunks)
    if csum + cl >= s
        # This is the last chunk we need: make a thunk from it (with
        # Dagger.@spawn) that cuts it down to the `s - csum` missing rows.
        push!(chunks, the_cut_thunk)
        break
    else
        csum += cl
        push!(chunks, chunk)
    end
end
return DTable(chunks)
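
A fuller, runnable version of the same sketch, under the assumption that `sink`, `rowtable`, and `chunk_lengths` are the helpers already used in this PR's diff and that a `DTable(chunks)`-style constructor is available; `first_rows` and `cut_rows` are hypothetical names, not the final implementation:

cut_rows(part, n) = sink(rowtable(part)[1:n])   # hypothetical helper

function first_rows(d, s::Integer)
    csum = 0
    chunks = Any[]
    for (cl, chunk) in zip(chunk_lengths(d), d.chunks)
        if csum + cl >= s
            # Last partition we need: spawn a thunk that keeps only
            # the remaining `s - csum` rows of this chunk.
            push!(chunks, Dagger.@spawn cut_rows(chunk, s - csum))
            break
        end
        csum += cl
        push!(chunks, chunk)
    end
    return DTable(chunks)   # assumed constructor, as in the sketch above
end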
