This post showcases how Solaris can be used for car segmentation (and detection/localization by proxy) using one of the models featured in SpaceNet 4. Although SpaceNet is focused on foundational infrastructure mapping challenges, the lessons learned and code developed can now easily be transferred to new problems, like vehicle detection, thanks to Solaris.
The fifth SpaceNet Challenge will launch in just a few weeks, focused on road networks and optimized routing via travel time estimates. In preparation for SpaceNet 5, this post discusses how one might build upon the results from open challenges, such as the first SpaceNet roads challenge (SpaceNet 3).
Performing machine learning (ML) and analyzing geospatial data are both hard problems requiring a lot of domain expertise. These demands have historically meant that one needed to be an expert in both fields to perform even the most basic analyses, making advances in AI for overhead imagery difficult to achieve. Is there anything we can do to reduce this barrier to entry, making it easier to apply machine learning methods to overhead imagery data?
In a previous blog post, we asked how geospatial deep learning performance varies with the amount of training data, and we trained a building footprint detector with different amounts of data to see that variation in action. This blog dives into the details of how it was done and what it all means.
How much data do I need to train my neural network? In this blog post, we will explore that question and answer it in the context of a specific case from the field of remote sensing imagery analysis. We’ll show that models trained on small amounts of data can perform surprisingly well.
This post discusses the results of CosmiQ Works’ study on super-resolution and its effect on object detection performance.