Zoom In: An Introduction to Circuits
By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks.
Training an end-to-end differentiable, self-organising cellular automata model of morphogenesis, able to both grow and regenerate specific patterns.
Exploring the baseline input hyperparameter, and how it impacts interpretations of neural network behavior.
Detailed derivations and open-source code to analyze the receptive fields of convnets.
A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.
This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article. We want to thank all the comm...
Section 3.2 of Ilyas et al. (2019) shows that training a model on only adversarial errors leads to non-trivial generalization on the original test set. We show that these experiments are a specific ca...
Refining the source of adversarial examples
An experiment showing that adversarial robustness makes neural style transfer work on a non-VGG architecture.
An example project using webpack, svelte-loader, and ejs to inline SVGs.
The main hypothesis in Ilyas et al. (2019) happens to be a special case of a more general principle that is commonly accepted in the distributional-shift robustness literature.
What we'd like to find out about GANs that we don't know yet.
How to turn a collection of small building blocks into a versatile tool for solving regression problems.
Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it ...
If we want to train AI to do what humans want, we need to study humans.
An Update from the Editorial Team
A powerful, under-explored tool for neural network visualizations and art.
A simple and surprisingly effective family of conditioning mechanisms.