Neural networks are important because of their ability to approximate nearly any relationship between input variables and target variables. In practice they tend to be difficult to train and to interpret, but they excel at pattern recognition tasks in images and sound. A newer branch of the field, known as deep learning, has been responsible for some of the most important recent breakthroughs in machine learning and artificial intelligence.
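For a concrete picture of what "approximating a relationship" means, the sketch below trains a single-hidden-layer network with back-propagation to fit y = sin(x). This is a minimal, assumed example rather than course code; the layer size, learning rate, and iteration count are arbitrary choices for this toy problem.

```python
# Minimal sketch (assumed example): a one-hidden-layer network trained with
# back-propagation to approximate y = sin(x).
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs in [-pi, pi] with target y = sin(x)
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

n_hidden = 20
W1 = rng.normal(scale=0.5, size=(1, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 0.1
for step in range(20000):
    # Forward pass: tanh hidden layer, linear output
    H = np.tanh(X @ W1 + b1)
    y_hat = H @ W2 + b2

    # Back-propagate the mean squared error gradient
    grad_out = (y_hat - y) / len(X)
    grad_W2 = H.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_H = (grad_out @ W2.T) * (1.0 - H ** 2)   # d tanh(z)/dz = 1 - tanh(z)^2
    grad_W1 = X.T @ grad_H
    grad_b1 = grad_H.sum(axis=0)

    # Plain gradient-descent updates
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

# The fitted network should now roughly reproduce sin(x) on new inputs
X_new = np.linspace(-np.pi, np.pi, 5).reshape(-1, 1)
print(np.hstack([np.sin(X_new), np.tanh(X_new @ W1 + b1) @ W2 + b2]))
```

The references below cover the same mechanics in depth, along with the tools (Enterprise Miner, H2O) that automate this kind of gradient computation at much larger scale.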
- Overview of training neural networks in Enterprise Miner - Blackboard electronic reserves
- The Definitive Performance Tuning Guide for H2O Deep Learning
- Predictive Modeling and Neural Networks in Enterprise Miner - Blackboard electronic reserves
- Introduction to Data Mining, Section 5.4
- Elements of Statistical Learning, Chapter 11
- Pattern Recognition and Machine Learning, Chapter 5
- Deep Learning, Chapters 6-9
- Learning Representations by Back-Propagating Errors - the seminal back-propagation paper from 1986 by David Rumelhart, Geoffrey Hinton, and Ronald Williams
- Gradient-Based Learning Applied to Document Recognition - the seminal deep learning and convolutional neural network paper from 1998 by Yann LeCun
- Reducing the Dimensionality of Data Using Neural Networks - the seminal deep learning paper from 2006 by Geoffrey Hinton and Ruslan Salakhutdinov
- Papers about problems with neural networks:
- Neural Network Zoo article - quick summary of the many different types of neural networks
- My Quora answers regarding standard neural networks and deep learning
- Neural network FAQ by Warren Sarle: ftp://ftp.sas.com/pub/neural/FAQ.html#A2 - more than you ever wanted to know about traditional neural networks (some info may be dated and/or obsolete)
- MNIST Data