Today we release a larger 70B version of Giraffe, succeeding the 13B model we introduced in a previous blog post. Giraffe is a family of models finetuned from base Llama 2 that apply context-length extension techniques to increase their effective context window…
The Abacus.AI team has published two papers to appear at the Conference on Neural Information Processing Systems (NeurIPS) 2022. NeurIPS is a top machine learning conference held every December. Abacus.AI is also planning a social event during NeurIPS – stay tuned for…
We open-sourced our debiasing module a few weeks ago.
In this post, we give an introduction to bias in computer vision models, discuss our new research on debiasing, and show how you can debias your own model with our open-source code.
The use of facial…
Neural architecture search (NAS) is a popular area of machine learning, with the goal of automating the development of the best neural network for a given dataset. Since 2017, hundreds of NAS algorithms have been proposed, and with the recent release of two NAS benchmark…
In this post, we give an introduction to bias in machine learning, and we discuss our new research for debiasing pretrained neural networks (i.e., post-hoc debiasing). ArXiv paper: https://arxiv.org/abs/2006.08564 Source code…
Like many other machine learning concepts, meta-learning is an approach akin to something human beings already do. Meta-learning simply means “learning to learn.” Whenever we learn a new skill, there is some prior experience we can relate it to, which makes the…
In the last decade or so, there has been a growing number of applications of machine learning in domains that have a significant and direct impact on human life. In these domains, it is common to find that machine learning models end up learning societal biases and…
In this post, we discuss a new state-of-the-art algorithm for neural architecture search.
ArXiv paper: https://arxiv.org/abs/1910.11858
Source code: https://www.github.com/naszilla/bananas
Image source…