We’re closing in on the final stretch of the SpaceNet 5 Challenge, which aims to extract road networks and route travel times directly from satellite imagery. There’s still plenty of time to get involved: our previous blog post showed reasonable road mask predictions after only 10 hours of training.
Deep learning models for interpreting satellite imagery improve as the amount of training data grows. In this post, we recreate our first analysis with an entirely new model architecture to see what changes and what stays the same.
In support of SpaceNet 5’s rather complex challenge, this post walks readers through the first step of our baseline: preparing the data by creating training masks for a deep learning segmentation model.
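The core idea behind a road training mask is simple: label every pixel within some buffer distance of a road centerline as "road". The baseline itself uses geospatial tooling to do this from vector labels, but the concept can be sketched in plain NumPy. The `road_mask` function below is a hypothetical toy illustration, not the pipeline's actual code: it takes line segments in pixel coordinates and rasterizes them into a binary mask.

```python
import numpy as np

def road_mask(shape, segments, width):
    """Rasterize road centerline segments into a binary training mask.

    Toy sketch of the training-mask idea: any pixel whose distance to a
    centerline segment is <= `width` is labeled 1 (road), else 0.

    shape    : (height, width) of the output mask
    segments : list of ((x0, y0), (x1, y1)) endpoints in pixel coords
    width    : buffer radius in pixels
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]          # per-pixel coordinates
    mask = np.zeros(shape, dtype=np.uint8)
    for (x0, y0), (x1, y1) in segments:
        dx, dy = x1 - x0, y1 - y0
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:
            # degenerate segment: distance to the single point
            dist = np.hypot(xx - x0, yy - y0)
        else:
            # project each pixel onto the segment, clamped to the endpoints
            t = np.clip(((xx - x0) * dx + (yy - y0) * dy) / seg_len2, 0, 1)
            dist = np.hypot(xx - (x0 + t * dx), yy - (y0 + t * dy))
        mask[dist <= width] = 1
    return mask
```

In the real workflow the labels arrive as georeferenced vectors rather than pixel coordinates, so the production version also has to reproject geometries into the image's pixel grid before buffering and burning them in.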
When it comes to the relationship between geospatial neural network performance and the amount of training data, do geographic differences matter?
When training a deep neural network to identify building footprints in satellite imagery, having more training data never hurts. But how much does more data help, and when is it worth the cost and difficulty of procuring it?
Using Solaris, you can fine-tune deep learning models pre-trained on overhead imagery in five minutes and achieve performance comparable to that of past SpaceNet Challenge prize-winners.