Uniform priors for data-efficient transfer

Researchers, including MIT Jameel Clinic principal investigator Marzyeh Ghassemi, authored a paper titled 'Uniform priors for data-efficient transfer' about deep neural networks and the scalability of machine learning models.

In the paper's abstract, the authors write: 'Deep neural networks have shown great promise on a variety of downstream applications; but their ability to adapt and generalize to new data and tasks remains a challenge. However, the ability to perform few or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to understand what makes for good, transferable features in deep networks that best allow for such adaptation. In this paper, we shed light on this by showing that features that are most transferable have high uniformity in the embedding space and propose a uniformity regularization scheme that encourages better transfer and feature reuse. We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data, for which we conduct a thorough experimental study covering four relevant and distinct domains: few-shot meta learning, deep metric learning, zero-shot domain adaptation, as well as out-of-distribution classification. Across all experiments, we show that uniformity regularization consistently offers benefits over baseline methods and is able to achieve state-of-the-art performance in deep metric learning and meta learning.'
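The paper's exact regularizer is not reproduced on this page. As a rough illustration of the general idea of uniformity regularization, the sketch below implements a common loss of this kind: the log of the mean pairwise Gaussian potential between normalized embeddings on the unit hypersphere (in the style of Wang & Isola, 2020). This is an assumed stand-in, not the authors' formulation; the temperature `t` and the function name are illustrative.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    # Hypothetical uniformity regularizer (assumption: Gaussian-potential style,
    # as in Wang & Isola 2020); the paper's actual scheme may differ.
    # Project features onto the unit hypersphere so distances are comparable.
    z = F.normalize(embeddings, dim=-1)
    # Condensed vector of pairwise squared Euclidean distances.
    sq_dists = torch.pdist(z, p=2).pow(2)
    # Log of the mean Gaussian potential; minimized when points spread uniformly.
    return sq_dists.mul(-t).exp().mean().log()
```

In training, such a term would typically be added to the task objective, e.g. `loss = task_loss + lam * uniformity_loss(features)`, where `lam` is a hypothetical weighting hyperparameter chosen by validation.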

Details

Author(s): Marzyeh Ghassemi
Publication date: 13 October 2020
Source: arXiv
Related programme: MIT Jameel Clinic

Related publications

Generative AI in the era of 'alternative facts' | MIT Open Publishing Services
External data and AI are making each other more valuable | Harvard Business Review Press
Removing biases from molecular representations via information maximisation | arXiv
Effective human-AI teams via learned natural language rules and onboarding | arXiv
A deep dive into single-cell RNA sequencing foundation models | bioRxiv
Antibiotic identified by AI | Nature
LLM-grounded video diffusion models | arXiv
Successful Development of a Natural Language Processing Algorithm for Pancreatic Neoplasms and Associated Histologic Features | Pancreas
Leveraging artificial intelligence in the fight against infectious diseases | Science
BioAutoMATED: An end-to-end automated machine learning tool for explanation and design of biological sequences | Cell Systems
Conformal language modeling | arXiv
Comparison of mammography AI algorithms with a clinical risk model for 5-year breast cancer risk prediction: An observational study | Radiological Society of North America
Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii | Nature
Algorithmic pluralism: A structural approach towards equal opportunity | arXiv
Artificial intelligence and machine learning in lung cancer screening | ScienceDirect
Wide and deep neural networks achieve consistency for classification | PNAS
Autocatalytic base editing for RNA-responsive translational control | Nature
DiffDock: Diffusion steps, twists and turns for molecular docking | arXiv
Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography | Journal of Clinical Oncology
Sequential multi-dimensional self-supervised learning for clinical time series | Proceedings of Machine Learning Research
Queueing theory: Classical and modern methods | Dynamic Ideas
Toward robust mammography-based models for breast cancer risk | Science
The age of AI: And our human future | Little, Brown and Company
Uniform priors for data-efficient transfer | arXiv
Machine learning under a modern optimisation lens | Dynamic Ideas
The marginal value of adaptive gradient methods in machine learning | Advances in Neural Information Processing Systems
Efficient graph-based image segmentation | International Journal of Computer Vision
