We’re entering the final stretch of the SpaceNet 5 Challenge, which aims to extract road networks and route travel times directly from satellite imagery. Yet there’s still plenty of time to get involved: our previous blog showed reasonable road mask predictions after only 10 hours of training.
Deep learning models for interpreting satellite imagery improve as the amount of training data grows. In this post, we’ll recreate our first analysis with a whole new model architecture to see what changes and what stays the same.
In support of SpaceNet 5’s rather complex challenge, this post walks readers through the first step of our baseline: preparing the data and creating training masks for a deep learning segmentation model.
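The core idea behind a training mask is simple: buffer each road centerline and mark every pixel inside the buffer as "road." Below is a minimal, self-contained numpy sketch of that idea; the real SpaceNet masks are rasterized from GeoJSON road vectors with geospatial tooling, so `road_mask` and its parameters here are hypothetical stand-ins for illustration only.

```python
import numpy as np

def road_mask(h, w, p0, p1, half_width):
    """Rasterize one road segment (p0 -> p1, in pixel coords) into a
    binary mask by marking every pixel within `half_width` pixels of
    the centerline. Conceptual stand-in for the buffered-geometry
    rasterization step used to build segmentation training masks."""
    ys, xs = np.mgrid[0:h, 0:w]
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Parameter t of the closest point on the segment for each pixel,
    # clipped so we stay on the segment (not the infinite line).
    t = ((xs - p0[0]) * d[0] + (ys - p0[1]) * d[1]) / (d @ d)
    t = np.clip(t, 0.0, 1.0)
    cx, cy = p0[0] + t * d[0], p0[1] + t * d[1]
    dist = np.hypot(xs - cx, ys - cy)
    return (dist <= half_width).astype(np.uint8)

# A horizontal road through the middle of a 32x32 tile
mask = road_mask(32, 32, (2, 16), (30, 16), half_width=2)
```

In practice each road in the label GeoJSON is buffered (SpaceNet roads use a fixed buffer in meters) and all buffers are burned into one mask per image tile; multi-channel variants encode speed or lane information in additional channels.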
When it comes to the relationship between geospatial neural network performance and the amount of training data, do geographic differences matter?
When training a deep neural network to identify building footprints in satellite imagery, having more training data never hurts. But how much does more data help, and when is it worth the cost and difficulty of procuring it?
With Solaris, you can spend just five minutes fine-tuning deep learning models pre-trained on overhead imagery and achieve performance comparable to past SpaceNet Challenge prize-winners.
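Why does a few minutes of fine-tuning suffice? Because the pre-trained weights stay frozen and only a small task head is updated. The toy numpy sketch below illustrates that idea in miniature; it is not the Solaris API, and every name in it (`encoder`, `W_frozen`, the logistic head) is a hypothetical stand-in under the assumption of a frozen feature extractor and a trainable linear head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" encoder: weights are FROZEN during
# fine-tuning, standing in for a network trained on overhead imagery.
W_frozen = rng.normal(size=(64, 16)) / 8.0
def encoder(x):
    return np.maximum(x @ W_frozen, 0.0)  # fixed ReLU features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "new task" data: labels are linearly separable in feature space.
X = rng.normal(size=(200, 64))
feats = encoder(X)
w_true = rng.normal(size=16)
y = (feats @ w_true > 0).astype(float)

# Fine-tuning = a brief gradient descent on the small head only.
w = np.zeros(16)
for _ in range(200):
    p = sigmoid(feats @ w)
    w -= 0.1 * feats.T @ (p - y) / len(y)

acc = float(((sigmoid(feats @ w) > 0.5) == (y > 0.5)).mean())
```

Because only 16 head parameters are trained, the loop converges in a fraction of a second; the same economics are what make short fine-tuning runs on real pre-trained segmentation models practical.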
Now that the SpaceNet 5 dataset has been released and the challenge is live on Topcoder, we anticipate that this challenge will yield a great many insights into how well computer vision can automatically extract road networks and travel time estimates.
The range of challenges that can be addressed with overhead imagery is impressively broad, and it continues to grow as new and improved systems are deployed. Yet a lack of high-quality labeled training data continues to impede progress in many areas of remote sensing analytics. To this end, the SpaceNet partners are proud to announce the release of the SpaceNet 5 dataset, which adds significantly to the SpaceNet data corpus.
Optimal route selection from remote sensing imagery remains a significant challenge despite its importance in a broad array of applications. While identification of road pixels has been attempted before, estimation of route travel times from overhead imagery remains a novel problem.
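Once travel times are attached to the edges of an extracted road graph, optimal route selection reduces to a shortest-path search weighted by time rather than distance. The sketch below uses Dijkstra's algorithm on a toy graph; `fastest_route` and the example road network are illustrative assumptions, not part of the challenge tooling or scoring code.

```python
import heapq

def fastest_route(graph, src, dst):
    """Dijkstra's algorithm over a road graph whose edge weights are
    travel times (seconds). `graph` maps node -> list of
    (neighbor, travel_time). Returns (total_time, node path)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, t in graph.get(u, []):
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the destination to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Toy network: the direct A->B link is slow, so detouring via C wins.
roads = {
    "A": [("B", 30.0), ("C", 10.0)],
    "B": [("D", 10.0)],
    "C": [("B", 5.0), ("D", 40.0)],
}
time_s, path = fastest_route(roads, "A", "D")
```

The point of the toy detour is that time-weighted routing can disagree with distance-weighted routing, which is exactly why per-edge speed estimates from imagery matter.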
This post showcases how Solaris can be used for car segmentation (and detection/localization by proxy) using one of the models featured in SpaceNet 4. Although SpaceNet is focused on foundational infrastructure mapping challenges, the lessons learned and code developed can now easily be transferred to take on new problems (like vehicle detection), all thanks to Solaris.
The fifth SpaceNet Challenge will launch in just a few weeks, focused on road networks and optimized routing via travel time estimates. In preparation for SpaceNet 5, this post discusses how one might build upon the results from open challenges, such as the first SpaceNet roads challenge (SpaceNet 3).
Performing machine learning (ML) and analyzing geospatial data are both hard problems, each demanding significant domain expertise. Historically, this has meant that one needs to be an expert in both to perform even the most basic analyses, making advances in AI for overhead imagery difficult to achieve. Is there anything we can do to reduce this barrier to entry, making it easier to apply machine learning methods to overhead imagery data?