The dataset consists of 6,400 grayscale image sequences, each captured from several locations according to the methodology summarised in the problem description. With 5 frames per sequence, this gives a total of 32,000 images. Each image is 640x480 pixels. The coordinates of an object within the field of view (FOV) of an image are given by (x, y), where x is in the range [-0.5, 639.5] and y is in the range [-0.5, 479.5].
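The half-pixel coordinate ranges reflect the convention that pixel centres sit at integer coordinates, so the top-left pixel spans [-0.5, 0.5] in each direction. A minimal sketch of a bounds check under this convention (the function name is ours, not part of the competition tooling):

```python
def in_fov(x, y, width=640, height=480):
    """Return True if (x, y) lies inside the image FOV, using the
    convention that pixel centres sit at integer coordinates."""
    return -0.5 <= x <= width - 0.5 and -0.5 <= y <= height - 0.5

print(in_fov(0.0, 0.0))      # centre of the top-left pixel -> True
print(in_fov(639.5, 100.0))  # right edge, still inside -> True
print(in_fov(640.0, 100.0))  # outside -> False
```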
Download the data
The data for this competition (~4.2GB) is hosted at the open data repository "Zenodo" and will be accessible once the competition has started:
train contains one folder for each image sequence. Each folder contains 5 grayscale .png images ("frames") of size 640x480 that may be used for reference or as input for machine learning.
train_anno.json describes the coordinates of each object for each sequence and frame of the train folder in .json format.
test is organized in the same way as train. Your task is to construct an analogous test_anno.json (the filename does not matter) for all images in this folder.
Details of the structure of the .json-files are described in the Submission Format page.
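As a sketch, the annotation file can be read and written with Python's standard json module. The record layout below (sequence_id, frame, num_objects, object_coords) is an illustrative assumption; the authoritative schema is described on the Submission Format page:

```python
import json

# A synthetic annotation in an assumed schema: one record per (sequence,
# frame) pair, listing the (x, y) coordinates of every valid object.
# Field names are illustrative; consult the Submission Format page.
sample = [
    {"sequence_id": 1, "frame": 1, "num_objects": 2,
     "object_coords": [[12.3, 45.6], [300.0, 200.5]]},
    {"sequence_id": 1, "frame": 2, "num_objects": 2,
     "object_coords": [[13.1, 46.0], [301.2, 201.1]]},
]

with open("anno_sample.json", "w") as f:
    json.dump(sample, f)

with open("anno_sample.json") as f:
    annotations = json.load(f)

for record in annotations:
    print(record["sequence_id"], record["frame"], record["num_objects"])
```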
Disclaimer on ground truth coordinates
The ground truth coordinates in the training set were obtained by the organisers using a mix of automated and manual verification. While significant effort has been devoted to producing and validating the labelled coordinates, the probability of error is not zero. If you become convinced that a particular annotation is inaccurate, please write a mail to (act _AT_ esa.int) with the subject [SpotGEO - Labelling Error] describing the issue. Please keep in mind that this will not affect the leaderboard, as it is computed on unreleased labels that you have no means to access. It is nevertheless of interest to us to learn of such occurrences, as this helps us maintain a good dataset.
We created a starter kit (this link will take you to "Zenodo") to help you get started working with the data. It contains validation and scoring code together with a simple baseline algorithm, all written in Python.
Q1: Within one sequence, does each frame have the same number of objects?
A1: Yes, each frame in a sequence has the same number of objects, since we have ensured that each valid object lies in the common FOV of all frames in the sequence. Note that this does not mean that a valid object is visible in all frames, since it may be occluded or too dim in a subset of the frames.
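This invariant can be checked programmatically against the annotations. A minimal sketch, using a made-up record layout with a num_objects field per (sequence, frame) pair (the field names are our assumption, not the official schema):

```python
from collections import defaultdict

# Synthetic per-frame records; in practice these would come from
# train_anno.json.  The record layout is an illustrative assumption.
records = [
    {"sequence_id": 1, "frame": 1, "num_objects": 2},
    {"sequence_id": 1, "frame": 2, "num_objects": 2},
    {"sequence_id": 2, "frame": 1, "num_objects": 0},
    {"sequence_id": 2, "frame": 2, "num_objects": 0},
]

# Collect the distinct object counts seen in each sequence.
counts = defaultdict(set)
for r in records:
    counts[r["sequence_id"]].add(r["num_objects"])

# Every sequence should map to exactly one distinct count.
assert all(len(c) == 1 for c in counts.values())
```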
Q2: Does each sequence contain a valid object?
A2: No, some sequences do not have valid objects.
Q3: What is the exact definition of an object?
A3: By object we mean GEO or near-GEO orbiting objects with a consistently detectable presence (see below) in the FOV of the images. As explained in the problem description, such objects were imaged as blobs or short streaks. Note that we are not interested in low Earth orbit (LEO) objects that occasionally appeared in the FOV of the frames and were imaged as very long streaks (longer than the star streaks). Hence, LEO objects are not considered valid objects in this challenge. Likewise, blob-like artefacts due to sensor noise and bright pixels due to sensor defects should not be considered valid objects. During our labelling procedure, an object was considered consistently detectable if it appeared in at least 3 out of 5 frames in the sequence. If an object appeared in only 1 or 2 frames in the sequence, it was not labelled as a valid object and should not be detected by your algorithm.
Q4: If an object is not observable in a frame in the sequence (but the other images support the presence of the object in the common FOV), what should the coordinates of the object be in the image in which it is not observable?
A4: A valid object should form a trajectory across the sequence according to GEO orbital motion. Algorithms should thus be able to estimate the coordinates of a detected object across all frames, including frames where it is not observable. See A3 above on consistently detectable objects.
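Since the interframe motion is constant across the sequence (see A5), one simple way to fill in frames where an object is not visible is to assume its image trajectory is approximately linear over the 5 frames, fit x(t) and y(t) by least squares to the frames where it was detected, and extrapolate. This is our illustration of the idea, not the organisers' method, and the coordinates below are made up:

```python
def fit_line(ts, vs):
    """Least-squares fit of v = a + b*t; returns (a, b)."""
    n = len(ts)
    mt = sum(ts) / n
    mv = sum(vs) / n
    b = (sum((t - mt) * (v - mv) for t, v in zip(ts, vs))
         / sum((t - mt) ** 2 for t in ts))
    return mv - b * mt, b

# Detections in frames 1, 2, 4 and 5; the object was missed in frame 3.
frames = [1, 2, 4, 5]
xs = [100.0, 102.0, 106.0, 108.0]
ys = [50.0, 51.0, 53.0, 54.0]

ax, bx = fit_line(frames, xs)
ay, by = fit_line(frames, ys)

# Predict the object's position in the missing frame 3.
x3, y3 = ax + bx * 3, ay + by * 3
print(round(x3, 1), round(y3, 1))  # 104.0 52.0
```

For real data a robust fit (e.g. discarding outlier detections) may be preferable, but a plain linear fit already captures the constant-motion assumption over a 5-frame sequence.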
Q5: Within a sequence, are the camera orientations for 5 frames the same?
A5: No. The camera orientations are different because the camera was rotated while recording a sequence. However, the same interframe rotational motion was used across consecutive frames. See the diagram in the problem description.
Q6: What should I do if I suspect a labelling error?
A6: As mentioned above, please write a mail to (act _AT_ esa.int) with the subject [SpotGEO - Labelling Error] describing the issue.