If we are querying hundreds of thousands of granules, we need to handle them more efficiently, perhaps by implementing an iterator like:
from earthaccess import DataGranules

links = []
for g in DataGranules().concept_id("c-some-datasets").items():  # proposed .items() would page through results lazily
    links.extend(g.data_links())  # data_links() returns a list of URLs per granule
The idea is that we would only load the current page into memory, rather than every granule returned by the collection query.
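For context, here is a minimal sketch of how such a lazy iterator could work against CMR directly, using the CMR-Search-After header for deep paging. The function name, its parameters, and the page handling are illustrative assumptions, not earthaccess's actual implementation:

import requests

CMR_GRANULES_URL = "https://cmr.earthdata.nasa.gov/search/granules.umm_json"

def iter_granules(collection_concept_id, page_size=500):
    # Hypothetical generator: fetches one page of granule metadata at a time,
    # so only page_size records are ever held in memory.
    params = {"collection_concept_id": collection_concept_id, "page_size": page_size}
    headers = {}
    while True:
        resp = requests.get(CMR_GRANULES_URL, params=params, headers=headers)
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:
            return
        yield from items
        # CMR returns a CMR-Search-After header pointing at the next page;
        # sending it back on the next request continues the paging.
        search_after = resp.headers.get("CMR-Search-After")
        if not search_after:
            return
        headers["CMR-Search-After"] = search_after

A DataGranules().items() method could wrap a generator like this one, yielding granule objects page by page instead of materializing the full result list.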