Welcome!
This is Lizhong Zheng. I am a professor in EECS at MIT, working in the area of statistical data analysis. I made this page to keep some of the experiments and demos related to our recent research work. The goal is not only to show you the code, but also to explain why we did these experiments in the first place, with a little math, and to give some pointers to help you run them and make them your own. That's got to be fun, ain't it! (It also lets me take a break from writing formal research papers and write something casual instead, which is fun too!)
The main question we try to answer on this page is:
Why are neural networks so powerful?
That is, what is the statistical relationship between the features and the labels that can be represented by this data structure, i.e., by a neural network?
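To make this a bit more concrete, here is a minimal sketch (my own, not code from the original experiments, with placeholder sizes: Dx for the feature dimension, Cy for the number of classes, and a hidden layer of width 8) of the kind of data structure we have in mind: a small network whose softmax output can be read as a representation of the conditional distribution of the label given the features.

from keras.models import Sequential
from keras.layers import Dense, Activation

Dx, Cy = 2, 3    # placeholder sizes: feature dimension and number of classes

# A one-hidden-layer network: the softmax output is a distribution over the
# Cy possible labels, so a trained network of this form represents (an
# approximation of) the conditional distribution of the label given the features.
model = Sequential()
model.add(Dense(8, input_dim=Dx))
model.add(Activation('relu'))
model.add(Dense(Cy))
model.add(Activation('softmax'))
model.summary()    # prints the layer shapes, so you can check the structure

Asking what relationships this structure can represent is just asking what functions of the features can show up at that softmax output.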
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD

# N, Dx, Cy, Y, and M are assumed defined: N samples, Dx-dimensional features,
# Cy classes, labels Y, and a matrix M whose columns are the class means.
X = np.zeros([N, Dx])
Labels = np.zeros([N, Cy])    # the neural network takes the indicators instead of Y
for i in range(N):
    X[i, :] = M[:, Y[i]]      # each sample starts from the mean vector of its class
    Labels[i, Y[i]] = 1       # one-hot indicator of the label
X = X + np.random.normal(0, .5, X.shape)    # add Gaussian noise to the features
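The imports above (Sequential, Dense, Activation, SGD) hint at what comes next. As a hedged sketch of my own, not the page's actual training code, and with placeholder choices for epochs and batch_size, the generated pairs (X, Labels) can be fed to a small softmax classifier, reusing the imports and the arrays defined in the block above:

# Minimal training sketch: a single linear layer followed by softmax, trained
# with SGD on the one-hot Labels using the cross-entropy loss.
model = Sequential()
model.add(Dense(Cy, input_dim=Dx))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer=SGD(), metrics=['accuracy'])
model.fit(X, Labels, epochs=50, batch_size=32, verbose=0)
print(model.evaluate(X, Labels, verbose=0))    # [loss, accuracy] on the training data

A single softmax layer like this is just multi-class logistic regression; if you add hidden layers, as in the earlier sketch, the same compile and fit calls apply unchanged.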