Artificial Intelligence

Visual Perception with Deep Learning

A long-term goal of Machine Learning research is to solve highly complex “intelligent” tasks, such as visual perception, auditory perception, and language understanding. To reach that goal, the ML community must solve two problems: the Deep Learning Problem and the Partition Function Problem. There is considerable theoretical and empirical evidence that complex tasks, such as invariant object recognition in vision, require “deep” architectures, composed of multiple layers of trainable non-linear modules. The Deep Learning Problem is related to the difficulty of training such deep architectures. Several methods have recently been proposed to train (or pre-train) deep architectures in…
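
As a concrete illustration (not from the talk itself), here is a minimal sketch in plain NumPy of what “multiple layers of trainable non-linear modules” computes; the layer sizes and the tanh non-linearity are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 64, 10]            # input -> hidden -> hidden -> output
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes, sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = np.tanh(x @ W)            # each module: linear map + non-linearity
    return x @ weights[-1]            # final linear read-out

y = forward(rng.standard_normal(784))
```

Training all of these weights jointly by gradient descent is what becomes difficult as the stack deepens, which is the Deep Learning Problem the abstract refers to.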

Recent Developments in Deep Learning

Deep networks can be learned efficiently from unlabeled data. The layers of representation are learned one at a time, using a simple learning module that has only one layer of latent variables. The values of the latent variables of one module form the data for training the next module. Although deep networks have been quite successful for tasks such as object recognition, information retrieval, and modeling motion capture data, the simple learning modules do not have multiplicative interactions, which are very useful for some types of data. The talk will show how to introduce multiplicative interactions into the basic…
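
A hedged sketch of the greedy layer-wise procedure the abstract describes, with scikit-learn’s BernoulliRBM standing in for the “simple learning module”; the layer sizes, hyper-parameters, and random data are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.rand(1000, 784)  # stand-in for unlabeled data in [0, 1]

layer_sizes = [256, 64]
data, layers = X, []
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05, n_iter=10)
    rbm.fit(data)                # train this module on the current data
    data = rbm.transform(data)   # latent values become the next module's data
    layers.append(rbm)
```

Each pass trains one module in isolation, and its latent activations become the “data” for the module above it, exactly as the abstract states.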

Bay Area Vision Meeting: Unsupervised Feature Learning and Deep Learning

Despite machine learning’s numerous successes, applying machine learning to a new problem usually means spending a long time hand-designing the input representation for that specific problem. This is true for applications in vision, audio, text/NLP, and other problems. To address this, researchers have recently developed “unsupervised feature learning” and “deep learning” algorithms that can automatically learn feature representations from unlabeled data, thus bypassing much of this time-consuming engineering. Building on such ideas as sparse coding and deep belief networks, these algorithms can exploit large amounts of unlabeled data (which is cheap and easy to obtain) to learn a good feature…
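
To make “sparse coding” concrete (an assumed toy example, not taken from the talk), here is a minimal sketch of learning a feature dictionary from unlabeled patches with scikit-learn; the patch data and hyper-parameters are placeholders:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

patches = np.random.randn(5000, 64)  # stand-in for 8x8 unlabeled image patches

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0)
codes = dico.fit(patches).transform(patches)  # sparse feature representation
```

The learned codes can then be fed to a downstream classifier in place of hand-designed features.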

O’Reilly Webcast: Deep Learning – The Biggest Data Science Breakthrough of the Decade

Machine learning and AI have appeared on the front page of the New York Times three times in recent memory: 1) when a computer beat the world’s #1 chess player; 2) when Watson beat the world’s best Jeopardy players; and 3) when deep learning algorithms won a chemo-informatics Kaggle competition. We all know about the first two… but what’s that deep learning thing about? This happened in November of last year, and it represents a critical breakthrough in data science that every executive will need to know about and react to in the coming years. The NY Times said that…

Machine learning – Deep learning I

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task. Machine learning algorithms build a mathematical model of sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to perform the task.  
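
A minimal illustration of that definition (an assumed toy example using scikit-learn): the model is fit to training data rather than hand-coded with explicit rules.

```python
from sklearn.linear_model import LogisticRegression

X_train = [[0.1], [0.4], [0.9], [1.2]]  # "training data"
y_train = [0, 0, 1, 1]                  # observed outcomes

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[1.0]]))  # predicts a label for an unseen input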

Trends in Deep Learning

This talk gives a brief history of deep learning architectures, moving into modern trends and research in the field. Key points of discussion are neural activation functions, weight optimization strategies, techniques for hyper-parameter selection, and example architectures for different problem sets. We finish with a few notable examples of “web scale” deep learning at work. Along the way, the talk briefly touches on sklearn, Theano, pylearn2, theanets, and hyperopt.
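
As a hedged sketch of the hyper-parameter selection theme, here is how a search might look with hyperopt, one of the tools the talk lists; the objective function is a toy stand-in for a real training-and-validation run:

```python
from hyperopt import fmin, tpe, hp

def objective(lr):
    # In practice: train a model with learning rate `lr`, return validation loss.
    return (lr - 0.01) ** 2

best = fmin(fn=objective,
            space=hp.loguniform("lr", -10, 0),  # lr roughly in [4.5e-5, 1]
            algo=tpe.suggest,
            max_evals=50)
print(best)  # e.g. {'lr': ...}
```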

Machine Learning Discussion Group – Deep Learning w/ Stanford AI Lab (1 of 3)

Adam Coates will give an overview of some recent research projects from the Stanford Artificial Intelligence Lab and will do a presentation with open discussion on Deep Learning, an exciting recent addition to the machine learning algorithm family. The format will be interactive as Adam will answer questions from the group. So this will be a great opportunity to learn from one of the authorities on this exciting topic.  

Deep Learning of Representations

Yoshua Bengio will give an introduction to the area of Deep Learning, to which he has been one of the leading contributors. It is aimed at learning representations of data, at multiple levels of abstraction. Current machine learning algorithms are highly dependent on feature engineering (manual design of the representation fed as input to a learner), and it would be of high practical value to design algorithms that can do good feature learning. The ideal features disentangle the unknown underlying factors that generated the data. It has been shown both through theoretical arguments and empirical studies that deep architectures…

How To Create A Mind: Ray Kurzweil at TEDx Silicon Alley

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.  

Deep Learning VM Images

Imagine if you could avoid the headache of setting up new libraries, configuring them, and making sure they are all compatible. In this episode of AI Adventures, Yufeng shows you how to take advantage of deep learning VM images on Google Compute Engine to make setting up new environments a piece of cake.  
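A hedged sketch (not from the episode) of creating a Compute Engine VM from a Deep Learning VM image family using the Google API Python client; the project, zone, instance name, machine type, and image family are all assumptions you would substitute with your own:

```python
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"  # hypothetical values

config = {
    "name": "my-dl-vm",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            # Latest image from a family in the public Deep Learning VM project.
            "sourceImage": "projects/deeplearning-platform-release"
                           "/global/images/family/tf-latest-cpu",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}
compute.instances().insert(project=project, zone=zone, body=config).execute()
```

The same result can be achieved from the Cloud Console or the gcloud CLI, which is what the episode walks through.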

Getting Started with TensorFlow.js

TensorFlow.js is an ecosystem of JavaScript-based tools for training and deploying machine learning models. In this episode of AI Adventures, learn all about getting started with TensorFlow.js through tutorials like training a convolutional neural network in your browser and building a Pac-Man game that’s played with data from your webcam! This is only a beginning… stay tuned for deep dives on TensorFlow.js coming soon!

Scaling up Keras with Estimators (AI Adventures)

When you convert a Keras model to a TensorFlow Estimator, you get the best of both worlds: easy-to-read Keras model syntax along with distributed training with TensorFlow. In this episode of AI Adventures, Yufeng shows you how to scale up a Keras model with estimators so that it can run on larger datasets or across many machines. Plus, it makes it easy to do model serving once the training is complete!
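
A minimal sketch of the conversion step, assuming the TensorFlow 1.x-era tf.keras.estimator API; the model shape and random data are toy placeholders:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Wrap the compiled Keras model as a TensorFlow Estimator.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)

def input_fn():
    X = np.random.rand(100, 10).astype(np.float32)
    y = np.random.rand(100, 1).astype(np.float32)
    # Features are keyed by the Keras input layer's name.
    features = {model.input_names[0]: X}
    return tf.data.Dataset.from_tensor_slices((features, y)).batch(16)

estimator.train(input_fn=input_fn, steps=50)
```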

Getting Started with Keras (AI Adventures)

Getting started with Keras has never been easier! Not only is it built into TensorFlow, but when you combine it with Kaggle Kernels you don’t have to install anything! Plus you get to take advantage of the resources from the Kaggle community. In this episode of AI Adventures, Yufeng shows you how to get started with Keras. Take a look!  
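
A minimal getting-started sketch using the Keras API built into TensorFlow; random toy data stands in for a real dataset such as one you might pull from Kaggle:

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 4).astype("float32")
y = (X.sum(axis=1) > 2).astype("int64")  # toy binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32)
```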

Serving Scikit-learn Models at Scale

Scikit-learn is a great tool for building your models. When it comes time to deploy them for prediction, scale up using Google Cloud ML Engine. In this episode of AI Adventures, Yufeng shows you how to set up your own deployment pipeline with scikit-learn so you can go back to focusing on tuning your model!
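
A hedged sketch of the export step for serving scikit-learn models on Cloud ML Engine; the classifier and dataset are illustrative assumptions, but the service does expect a pickled model file named model.joblib (or model.pkl) in a Cloud Storage directory:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# ML Engine's scikit-learn runtime looks for this exact filename.
joblib.dump(model, "model.joblib")
# Then: upload the file to a gs:// bucket and create a model version
# pointing at that directory (via the gcloud CLI or Cloud Console).
```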
