
Distributed Training Strategies for the Structured Perceptron - RG report - UNDER CONSTRUCTION


Questions

Question 1

What are the weights w after running standard perceptron training on the data below? The standard perceptron does only binary classification, i.e. y \in {0, 1} (unlike the multi-class setting in Fig. 1), and its algorithm can be defined as follows:

def standard_perc(N, T, learning_rate):
    # Standard binary perceptron; T = [(x_1, y_1), ..., (x_|T|, y_|T|)], y_t in {0, 1}.
    w = [0.0] * len(T[0][0])
    for n in range(N):
        for x_t, y_t in T:
            activation = sum(w_i * x_i for w_i, x_i in zip(w, x_t))
            y_pred = 1 if activation > 0.5 else 0    # 1st difference to Figure 1: threshold 0.5
            if y_t != y_pred:
                # 2nd difference to Figure 1: update scaled by learning_rate
                w = [w_i + (y_t - y_pred) * learning_rate * x_i
                     for w_i, x_i in zip(w, x_t)]
    return w

Let us set:
learning_rate α = 0.3
X = [(1, 0), (0, 1)]   (data)
Y = [0, 1]             (classes)
T = {(x_t, y_t) : \forall t : x_t = X[t], y_t = Y[t]}
N = 5

w = [?, ?]

Answer:

x_1 | x_2 | y | w_1 | w_2 | x \dot w | y' | e = y - y' | Δw_1 = α * e * x_1 | Δw_2 = α * e * x_2
 1  |  0  | 0 |  0  |  0  |    0     | 0  |     0      |         0          |         0
 0  |  1  | 1 |  0  |  0  |    0     | 0  |     1      |         0          |        0.3
 1  |  0  | 0 |  0  | 0.3 |    0     | 0  |     0      |         0          |         0
 0  |  1  | 1 |  0  | 0.3 |   0.3    | 0  |     1      |         0          |        0.3
 1  |  0  | 0 |  0  | 0.6 |    0     | 0  |     0      |         0          |         0
 0  |  1  | 1 |  0  | 0.6 |   0.6    | 1  |     0      |         0          |         0

w = [0, 0.6] (epochs 4 and 5 produce no further updates, since both training examples are now classified correctly)
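
As a sanity check, the run can be reproduced with the standard_perc sketch from Question 1 (the list-of-pairs layout of T is an assumption matching the definition above):

T = [((1, 0), 0), ((0, 1), 1)]   # pairs X[t] with Y[t]
w = standard_perc(N=5, T=T, learning_rate=0.3)
print(w)   # -> [0.0, 0.6]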

Question 2

Imagine you want to solve the learning problem above with the multi-class perceptron algorithm (Figure 1). First you will need to figure out what the function f looks like; there are many good possibilities. Will you get a different w? Can you think of an f that will yield the same w, given that all the other variables stay the same?

f = ?
w = [?, ?]

Answer 1:

f(x, y) = (y == 0) ? (x, 0, 0, …, 0) : (0, 0, …, 0, x), i.e. x is placed in the block of the joint feature vector that corresponds to class y; for two features and two classes, f(x, 0) = (x_1, x_2, 0, 0) and f(x, 1) = (0, 0, x_1, x_2).

Answer 2:

According to English Wikipedia, this multiclass formulation reduces to the original perceptron when x is a real-valued vector, y is chosen from {0, 1}, and f(x, y) = y * x. Indeed, with this f the Figure 1 update w += f(x_t, y_t) - f(x_t, y') becomes w += (y_t - y') * x_t, i.e. exactly the standard update with learning rate 1.

However, I would say that this holds only for an activation threshold of 0. Therefore, this formula cannot be used to reproduce the example from Question 1, which uses a threshold of 0.5.
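
For concreteness, a minimal Python sketch of the Figure 1 multi-class perceptron with the block feature map from Answer 1 (function names and the tie-breaking toward the lower class index are assumptions, not from the paper):

def f(x, y, num_classes=2):
    # Block feature map from Answer 1: x goes into the block of class y.
    out = [0.0] * (len(x) * num_classes)
    for i, x_i in enumerate(x):
        out[y * len(x) + i] = x_i
    return out

def multiclass_perc(N, T, num_classes=2):
    w = [0.0] * (len(T[0][0]) * num_classes)
    for n in range(N):
        for x_t, y_t in T:
            # Predict the highest-scoring class (ties go to the lowest index).
            scores = [sum(w_i * f_i
                          for w_i, f_i in zip(w, f(x_t, y, num_classes)))
                      for y in range(num_classes)]
            y_pred = scores.index(max(scores))
            if y_t != y_pred:
                # Figure 1 update: w += f(x_t, y_t) - f(x_t, y_pred)
                for i, (f_gold, f_pred) in enumerate(
                        zip(f(x_t, y_t, num_classes),
                            f(x_t, y_pred, num_classes))):
                    w[i] += f_gold - f_pred
    return w

Note that with this f the weight vector lives in the 4-dimensional joint feature space, so the learned w necessarily differs from the 2-dimensional w of Question 1.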

Question 3

In Figure 4, why do you think the F-measure for the Regular Perceptron (first column) trained with the Serial (All Data) algorithm is worse than with the Parallel (Iterative Parameter Mix) algorithm?

Answer:

Question 4

Imagine you are an engineer at Google (lots of zeros in your bank account, unlimited food, etc. ;)) and want to learn your "w" by the parallel perceptron (with iterative parameter mixing). You are fed up with your work, though, so you want it finished in approximately 1 day, so that you can read all your unread Twitter posts. But Google has just launched an ecological campaign, and you need to justify the number of machines you are going to power on for your learning experiment to the eco-manager, who makes sure that Google is running the optimal number of machines (N).

Your eco-manager is a hobby mathematician, and he will not give power to your machines unless he sees a nice formula that theoretically justifies the optimality of N with respect to the energy wasted by running them (and with respect to the sunshine wasted to produce the electricity wasted by them). Fortunately, you don't need to justify your 4-day requirement (he understands ;)). Therefore, he would like you to include variables like:
T - size of your training data,
F - number of your features,
N - number of machines,
and other relevant factors, like lags, bandwidth, …

What variables will you need to include? Can you come up with a sketch of a formula that would get your machines the "juice" from the manager? ;) (hint: imagine T is around 100 thousand and F is around 500 thousand)

N = argmax_N f(N, T, F, …)
f = ?

Answer:

We have not concluded on a particular formula.
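
One possible starting point (purely a sketch; the cost constants below are assumptions, and the number of epochs to convergence may itself depend on N): with iterative parameter mixing, one epoch costs roughly

time_per_epoch(N) ≈ (T/N) * c_ex + lag + N * F * c_net

where c_ex is the time to process one training example, lag is the latency of one synchronization round, and N * F * c_net is the cost of collecting and averaging N weight vectors of F features each on a single master. Energy is roughly proportional to the number of machines times wall-clock time, so the eco-manager's objective could be

N = argmin_N N * epochs(N) * time_per_epoch(N), subject to epochs(N) * time_per_epoch(N) ≤ 1 day.

With T around 100 thousand and F around 500 thousand, the N * F mixing term grows quickly with N, which is what bounds the useful number of machines.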

