Archives for July 2019
The fifth SpaceNet Challenge will launch in just a few weeks, focused on road networks and optimized routing via travel time estimates. In preparation for SpaceNet 5, this post discusses how one might build upon the results from open challenges, such as the first SpaceNet roads challenge (SpaceNet 3).
Lab41 discusses VOiCES, the first open source speech dataset recorded in real environments with far-field microphones, capturing reverberant acoustics and background noise.
Performing machine learning (ML) and analyzing geospatial data are both hard problems requiring a great deal of domain expertise. These demands have historically meant that one needs to be an expert in both fields to perform even the most basic analyses, making advances in AI for overhead imagery difficult to achieve. Is there anything we can do to lower this barrier to entry and make it easier to apply machine learning methods to overhead imagery data?
Artificial Intelligence (AI) is everywhere, with applications ranging from medical diagnosis to autonomous driving. As use of AI and Machine Learning (ML) becomes increasingly common across industries and functions, interdisciplinary stakeholders are searching for ways to understand the systems they are using so that they can trust the decisions such systems inform.
In a previous blog post, we asked how geospatial deep learning performance varies with the amount of training data, and we trained a building footprint detector with different amounts of data to see that variation in action. This blog dives into the details of how it was done and what it all means.
How much data do I need to train my neural network? In this blog post, we explore that question and answer it in the context of a specific case from the field of remote sensing imagery analysis. We’ll show that models trained on surprisingly small amounts of data can perform quite well.