Storing a subset of the data

Opened by nilsleh

Hi @csaybar, thank you for this interesting dataset.

I was wondering about the following: I am interested in using only a subset of the data and saw the section about a mini-taco in the demo Colab notebook. Is there additional functionality to save just that subset to disk as a "subset" version of the full dataset? I could not find documentation on this (or I have been looking in the wrong place). Purely remote access to the dataset is unfortunately a bit too slow. Thanks in advance.

Hi @nilsleh,

We're still working on the spec, but expect to have the full documentation ready before July.
You can load both local and remote TACO datasets using the same function:

dataset = tacoreader.load("/home/user/file.taco") [local]

or

dataset = tacoreader.load("https://huggingface.co/datasets/tacofoundation/cloudsen12/resolve/main/cloudsen12-l1c.0000.part.taco") [remote]

Hope this helps!

Thank you for your reply. More specifically, I was wondering whether you can store a subset separately from the original dataset. For example, if I start with cloudsen12-l1c, which has 5 parts, and then create a mini-taco like in the Colab notebook, can I save that mini-taco as a new, standalone taco dataset, so that I no longer need the original 5 parts but only my single new taco subset? In other words, after I have created my new taco subset, can I delete the original 5 parts to save disk space? Is that possible?

Once you create a "minitaco" (tacoreader.compile), you no longer need the initial dataset: it compiles into an isolated, self-contained subset of the original dataset. With TACO, you can easily combine multiple TACO datasets and download only the samples you need; that is more or less our design philosophy. Is that what you were asking?

The workflow we usually follow is to select the samples you need online/remotely based on certain criteria, compile them (tacoreader.compile), and then share those samples with your colleagues. A minimal sketch of that pattern is shown below.
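Roughly, assuming the loaded dataset behaves like a pandas DataFrame as in the Colab (the filter column and the exact tacoreader.compile arguments here are assumptions, so check the Colab for the authoritative version):

import tacoreader

# Load the remote dataset; only the metadata table is fetched at this point
dataset = tacoreader.load("https://huggingface.co/datasets/tacofoundation/cloudsen12/resolve/main/cloudsen12-l1c.0000.part.taco")

# Filter the metadata table to the samples you need
# ("tortilla:data_split" is an illustrative column name)
subset = dataset[dataset["tortilla:data_split"] == "validation"]

# Compile the selection into a single self-contained .taco file on disk
# (argument names may differ from the released API)
tacoreader.compile(subset, "cloudsen12-l1c-subset.taco")

# The mini-taco loads like any other TACO dataset; the original parts
# are no longer needed and can be deleted
mini = tacoreader.load("cloudsen12-l1c-subset.taco")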

Awesome, yes, that is exactly what I was wondering; this is super convenient. Thank you!

nilsleh changed discussion status to closed

Sorry, I have another question: is there a concise way to combine multiple data sources and align them?

For example, in your Colab you do the following (and also have a separate function for loading the extra metadata):

import rasterio as rio

# Function to load Sentinel-2 data and labels
def load_sentinel_data(cloudsen12_l1c, cloudsen12_l2a, sample_idx):
    # Resolve the (local or remote) asset paths for this sample
    s2_l1c = cloudsen12_l1c.read(sample_idx).read(0)
    s2_l2a = cloudsen12_l2a.read(sample_idx).read(0)
    s2_label = cloudsen12_l2a.read(sample_idx).read(1)

    # Read the RGB bands (B04, B03, B02) from both products, plus the label
    with rio.open(s2_l1c) as s2_l1c_src, rio.open(s2_l2a) as s2_l2a_src, rio.open(s2_label) as lbl:
        s2_l1c_data = s2_l1c_src.read([4, 3, 2])
        s2_l2a_data = s2_l2a_src.read([4, 3, 2])
        s2_label_data = lbl.read()

    return s2_l1c_data, s2_l2a_data, s2_label_data
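which I then call once per sample, e.g. (assuming cloudsen12_l1c and cloudsen12_l2a were loaded with tacoreader.load):

l1c_rgb, l2a_rgb, label = load_sentinel_data(cloudsen12_l1c, cloudsen12_l2a, 0)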

Would it also be possible to filter each collection by the same criteria (for example, a given geospatial extent) and then create an aligned new mini-taco, where reading a particular sample_idx returns all the desired data files under that index? Then new_mini_taco.read(0) could load l1c, l2a, label, vv, and vh at once, without having to index three different tacos with the same sample_idx.
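Purely as an illustration (none of these calls or asset names exist in the current API, as far as I can tell; they are just what I am imagining), something like:

# Hypothetical usage, not an existing API
aligned = tacoreader.load("aligned-mini.taco")  # an aligned mini-taco
sample = aligned.read(0)

# One index returns every co-registered asset for that sample
s2_l1c = sample.read("l1c")    # Sentinel-2 L1C
s2_l2a = sample.read("l2a")    # Sentinel-2 L2A
label = sample.read("label")   # cloud mask
s1_vv = sample.read("vv")      # Sentinel-1 VV
s1_vh = sample.read("vh")      # Sentinel-1 VH

Thanks in advance.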

nilsleh changed discussion status to open
