This post showcases how Solaris can be used for car segmentation (and detection/localization by proxy) using one of the models featured in SpaceNet 4. Although SpaceNet is focused on foundational infrastructure mapping challenges, the lessons learned and code developed can now easily be transferred to take on new problems (like vehicle detection) all thanks to Solaris.
The fifth SpaceNet Challenge will launch in just a few weeks, focused on road networks and optimized routing via travel time estimates. In preparation for SpaceNet 5, this post discusses how one might build upon the results from open challenges, such as the first SpaceNet roads challenge (SpaceNet 3).
Performing machine learning (ML) and analyzing geospatial data are both hard problems requiring a lot of domain expertise. Historically, one needed to be an expert in both to perform even the most basic analyses, making advances in AI for overhead imagery difficult to achieve. Is there anything we can do to lower this barrier to entry, making it easier to apply machine learning methods to overhead imagery data?
In a previous blog post, we asked how geospatial deep learning performance varies with the amount of training data, and we trained a building footprint detector with different amounts of data to see that variation in action. This blog dives into the details of how it was done and what it all means.
How much data do I need to train my neural network? In this blog post, we will explore that question and answer it in the context of a specific case from the field of remote sensing imagery analysis. We’ll show that models trained on surprisingly small amounts of data can perform well.
Discusses the results of CosmiQ Works' study on super-resolution and its effect on object detection performance.
The SpaceNet partners are pleased to announce SpaceNet 5: Road Networks and Optimized Routing. This public competition will challenge competitors to automatically extract road networks from satellite imagery, along with travel time estimates along all roadways, thereby permitting true optimal routing.
This post discusses how to apply trained deep learning object detection models to large test images and visualize the results. It also discusses some of the new features of the recently released SIMRDWN v2, such as incorporation of the latest TensorFlow models and YOLO v3.
In this final post of our series about the challenge, I’ll explore the types of buildings that models identified well and geographic features that presented a challenge to the competitors.
The SpaceNet Challenge Round 4: Off-Nadir Building Detection Challenge is complete! This blog highlights a few key differentiators that improved segmentation in the winning algorithms.
Highlights results from The SpaceNet Challenge: Round 4, Off-Nadir Building Footprint Extraction. The winning solutions represented a 1.5-fold improvement over the initial baseline model’s performance.
The SIMRDWN framework extends popular object detection algorithms to operate in the overhead imagery domain. This blog discusses training a model to detect cars in overhead images.