IDEA Community Talks

Privacy Attacks in the context of Machine Learning

This talk will be an introduction to the study of privacy attacks in the context of machine learning, aimed at those unfamiliar with the literature. We will discuss who the stakeholders are, what information may be attacked, how it may be attacked, and why. The “how” will be at a high level, illustrated through some specific examples of privacy attacks.

Tackling Health Inequity using Machine Learning Fairness, AI, and Optimization

Miao will discuss the tools and techniques used to assess, visualize, and improve equity in clinical trials. A set of novel equity metrics for clinical trials is constructed by drawing on Machine Learning (ML) fairness research to quantify inequities across subgroups defined over multiple demographic or clinical characteristics, such as Hispanic female subjects who are underweight, or non-Hispanic Black male subjects aged over 64 with high fasting glucose levels.

The Dengue Spread Information System (DSIS)

Mosquitoes are responsible for transmitting many vector-borne diseases. Dengue is one such viral infection, transmitted by the Aedes mosquito. It is preventable, yet the number of dengue cases has risen 30-fold over the past 50 years. In several countries in South America and Asia, dengue is one of the leading causes of death. It is found mainly in tropical and subtropical regions, particularly in and around urban and semi-urban areas.

Privacy Preserving Synthetic Health Data Generation and Evaluation

Using real medical data in a classroom setting is difficult without limiting yourself to a few specific datasets. The research presented here develops an end-to-end workflow for generating synthetic health data and testing the synthetic data for privacy, resemblance, and utility. This includes a novel generation method called HealthGAN and metrics for measuring the privacy and resemblance of the generated data. The utility of the data is then measured in the context of the analysis task the dataset was designed to support.
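The abstract does not spell out how privacy of synthetic data is measured, and HealthGAN's own metrics are not given here. As a minimal sketch of one common proxy check (not the speakers' method), one can ask how close each synthetic record lies to its nearest real record: distances near zero suggest the generator may be memorizing training data.

```python
import numpy as np

def nearest_real_distances(synthetic, real):
    """For each synthetic record, return the Euclidean distance to its
    closest real record. Very small distances are a possible privacy
    red flag (memorization); this is an illustrative proxy, not the
    metric defined in the HealthGAN work."""
    # Broadcast to pairwise differences: shape (n_syn, n_real, n_features)
    diffs = synthetic[:, None, :] - real[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    return dists.min(axis=1)
```

In practice such distances are usually compared against the real data's own leave-one-out nearest-neighbor distances to decide what "too close" means.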

Fit to read: Making your figures legible in print and on the slide

Rendering your images at high resolution is only the first step towards creating legible figures. We will discuss how to render figures so that they are readable on the page, and why using the same rendering of a figure for both print and PowerPoint can sometimes leave everyone disappointed.
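One reason a single rendering fails in both media is that print and slides want different physical sizes and type sizes. The numbers below are illustrative defaults, not the speaker's recommendations, in a minimal matplotlib sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

def make_figure(target="print"):
    """Render the same plot at sizes suited to the medium.
    Assumed values: a single-column print figure around 3.5 in wide
    with ~9 pt labels, versus a slide figure that is physically larger
    with much bigger type so it reads from the back of the room."""
    if target == "print":
        figsize, fontsize = (3.5, 2.5), 9
    else:  # "slide"
        figsize, fontsize = (10, 6), 20
    fig, ax = plt.subplots(figsize=figsize)
    ax.plot([0, 1], [0, 1])
    ax.set_xlabel("x", fontsize=fontsize)
    ax.set_ylabel("y", fontsize=fontsize)
    ax.tick_params(labelsize=fontsize)
    return fig
```

The point is that scaling a print-sized figure up to fill a slide shrinks its text relative to the room, while pasting a slide-sized figure into a paper wastes space; rendering twice avoids both.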

Introduction to Differential Privacy

This talk will be aimed at an audience unfamiliar with the literature on privacy preservation (as I was a few weeks ago). The goal of the talk is first to illustrate that whether the output of some interaction with real data is privacy-preserving is not as simple a concept as it may first seem, motivating the need for a precise definition of privacy preservation. I will then give one possible definition, that of differential privacy, introduced in 2006, for which its authors were awarded the Gödel Prize in 2017.
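As a preview of the definition the talk will present (the abstract itself does not state it), the standard formulation of ε-differential privacy is:

```latex
A randomized mechanism $M$ satisfies $\varepsilon$-differential privacy if,
for all pairs of datasets $D, D'$ differing in a single record and for all
sets of outputs $S \subseteq \mathrm{Range}(M)$,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \,\Pr[M(D') \in S].
\]
```

Informally, no single individual's presence or absence in the data can change the distribution of outputs by more than a factor of $e^{\varepsilon}$.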