As data privacy technologies mature, how do you choose the right one? Here is an overview of the most promising partial-trust technologies.
In this Q&A on Explainable AI, Andrea Brennen speaks with Lab41 data scientist Nina Lopatina. Nina discusses different approaches to interpreting machine learning systems and points readers to several helpful open source tools and resources.
Dataviz.cafe is a public resource curated by IQT Labs for anyone interested in open-source software for data visualization. With over 700 software packages — summarized and tagged by data type, programming language, and other keywords — dataviz.cafe is designed to help people find free visualization tools for a wide variety of use cases.
In this Q&A on Explainable AI, Andrea Brennen speaks with In-Q-Tel’s Peter Bronez about descriptive vs. prescriptive models, “white box” vs. “black box” explanation techniques, and why some models are easier to explain than others.
Interspeech 2019, held in Graz, Austria, brought together experts from around the world to discuss recent advances in technologies at the crossroads of speech and language. At Lab41, we were excited to co-host one of ten special sessions and challenges together with SRI International: the VOiCES from a Distance challenge.
Cyphercat, a research project out of IQT Labs, helps determine whether training data is safe. Cyphercat measures the privacy risks that arise from sharing access to models trained on private data, enabling safe and informed model sharing.