Neural Network Sensors Sorter


CSC 302 1.5 Neural Networks: Simple Neural Nets for Pattern Recognition

Apple-Banana Sorter
(Diagram: sensors feed measurements to a neural network, which drives a sorter separating apples from bananas.)

Prototype Vectors
Measurement vector: p = [shape, texture, weight]^T
  Shape: 1 = round, -1 = elliptical
  Texture: 1 = smooth, -1 = rough
  Weight: 1 = more than 1 lb, -1 = less than 1 lb
Prototype banana: p1 = [-1, 1, -1]^T
Prototype apple: p2 = [1, 1, -1]^T
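As a sketch, the coding above can be written in plain Python (the `encode` helper and the names of its feature values are assumptions for illustration; the prototype vectors themselves come from the slide):

```python
# Encode a piece of fruit as p = [shape, texture, weight] using the
# +1/-1 coding from the slide. The dictionary keys are illustrative
# labels; only the +1/-1 values appear on the slide.
SHAPE = {"round": 1, "elliptical": -1}
TEXTURE = {"smooth": 1, "rough": -1}
WEIGHT = {"heavy": 1, "light": -1}  # heavy: > 1 lb, light: < 1 lb

def encode(shape, texture, weight):
    return [SHAPE[shape], TEXTURE[texture], WEIGHT[weight]]

p_banana = encode("elliptical", "smooth", "light")  # [-1, 1, -1]
p_apple = encode("round", "smooth", "light")        # [1, 1, -1]
```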

Perceptron
(Diagram: input p (R x 1), weight matrix W (S x R), bias b (S x 1), net input n = Wp + b (S x 1), output a (S x 1).)
a = hardlims(Wp + b)

Two-Input Case
With weights w_{1,1} = 1, w_{1,2} = 2 and bias b = -2:
a = hardlims([1 2]p + (-2))
The perceptron can classify input vectors into two categories. The decision boundary is where Wp + b = 0, i.e. [1 2]p + (-2) = 0, or p1 + 2p2 - 2 = 0. Inputs with n > 0 produce a = 1; inputs with n < 0 produce a = -1. A single perceptron can recognize only linearly separable patterns.
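A minimal Python sketch of this two-input perceptron (assuming the common convention that hardlims returns 1 when the net input is exactly zero; helper names are my own):

```python
# Symmetric hard limit: +1 if n >= 0, else -1 (treating n = 0 as +1
# is an assumed convention, not stated on the slide).
def hardlims(n):
    return 1 if n >= 0 else -1

# Two-input perceptron from the slide: W = [1 2], b = -2,
# so the decision boundary is p1 + 2*p2 - 2 = 0.
def perceptron(p):
    n = 1 * p[0] + 2 * p[1] - 2
    return hardlims(n)

print(perceptron([2, 2]))  # n = 4 > 0  -> 1
print(perceptron([0, 0]))  # n = -2 < 0 -> -1
```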

Apple/Banana Example
How many neurons are required? There are only two categories, so a single perceptron is enough to distinguish apples from bananas. Input vectors are three-dimensional (R = 3). Perceptron equation:
a = hardlims([w_{1,1} w_{1,2} w_{1,3}] [p1 p2 p3]^T + b)
Goal: choose the bias b and the elements of the weight matrix so that the perceptron can distinguish between apples and bananas.

Apple/Banana Example
(Diagram: prototypes p1 (banana) and p2 (apple) in the three-dimensional input space.) The decision boundary should separate the prototype vectors symmetrically; here the plane p1 = 0 does so. The bias determines the position of the boundary. The weight vector should be orthogonal to the decision boundary and point in the direction of the vector that should produce an output of 1.

Apple/Banana Example
Weight vector: W = [-1 0 0] (output 1 for bananas). Bias: b = 0. Equation of the decision boundary:
[-1 0 0][p1 p2 p3]^T + 0 = 0, i.e. p1 = 0

Testing the Network
Banana, p = [-1 1 -1]^T: a = hardlims([-1 0 0]p + 0) = hardlims(1) = 1 (banana)
Apple, p = [1 1 -1]^T: a = hardlims([-1 0 0]p + 0) = hardlims(-1) = -1 (apple)
Rough banana, p = [-1 -1 -1]^T: a = hardlims([-1 0 0]p + 0) = hardlims(1) = 1 (banana)
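These three test cases can be reproduced with a small Python sketch (the `classify` helper is an illustrative name; W and b are the values designed on the previous slide):

```python
def hardlims(n):
    return 1 if n >= 0 else -1

# Perceptron designed for the apple/banana example:
# W = [-1 0 0], b = 0. Output 1 means banana, -1 means apple.
W = [-1, 0, 0]
b = 0

def classify(p):
    n = sum(w * x for w, x in zip(W, p)) + b
    return hardlims(n)

print(classify([-1, 1, -1]))   # banana       -> 1
print(classify([1, 1, -1]))    # apple        -> -1
print(classify([-1, -1, -1]))  # rough banana -> 1
```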

Summary
We designed the network graphically. What about problems with high-dimensional input spaces? Learning algorithms can train networks by using a set of examples.

Hamming Network
Designed to solve binary pattern recognition problems. Uses both feedforward and recurrent (feedback) layers. Objective: to decide which prototype vector is closest to the input vector. This decision is indicated by the output of the recurrent layer.

Hamming Network
(Architecture diagram: a feedforward layer followed by a recurrent layer.)

Hamming Network
The number of neurons in the first layer equals the number of neurons in the second layer. There is one neuron in the recurrent layer for each prototype pattern. Only one neuron produces a nonzero output when the recurrent layer converges; this neuron indicates the prototype pattern that is closest to the input vector.

Feedforward Layer
Performs a correlation, or inner product, between each of the prototype patterns and the input pattern. The weight matrix is set to the prototype patterns, and each element of the bias vector equals R, the number of elements in the input vector. For the Apple/Banana example (S = 2):
W1 = [p1^T; p2^T] = [-1 1 -1; 1 1 -1],  b1 = [R; R] = [3; 3]

Feedforward Layer
Output of the feedforward layer:
a1 = W1 p + b1 = [p1^T p; p2^T p] + [3; 3] = [p1^T p + 3; p2^T p + 3]
The inner product of two vectors is largest when they point in the same direction and smallest when they point in opposite directions. Adding R to the inner product guarantees that the outputs of the feedforward layer can never be negative. The neuron with the largest output corresponds to the prototype pattern that is closest in Hamming distance to the input pattern.
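A plain-Python sketch of this feedforward layer for the apple/banana prototypes (helper names are illustrative):

```python
# Feedforward layer of the Hamming network: each row of W1 is a
# prototype pattern, and every bias element equals R = 3.
p1 = [-1, 1, -1]  # banana prototype
p2 = [1, 1, -1]   # apple prototype
W1 = [p1, p2]
R = 3

def feedforward(p):
    # Inner product of p with each prototype, plus R.
    return [sum(w * x for w, x in zip(row, p)) + R for row in W1]

print(feedforward(p1))            # [6, 4]: banana neuron is largest
print(feedforward([-1, -1, -1]))  # rough banana -> [4, 2]
```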

Hamming Distance
The Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. E.g., the Hamming distance between:
1011101 and 1001001 is 2.
2173896 and 2233796 is 3.
"toned" and "roses" is 3.
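A direct Python implementation of this definition (illustrative sketch):

```python
def hamming_distance(a, b):
    # The Hamming distance is defined only for sequences of equal length.
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    # Count positions where the corresponding symbols differ.
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1011101", "1001001"))  # 2
print(hamming_distance("2173896", "2233796"))  # 3
print(hamming_distance("toned", "roses"))      # 3
```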

Recurrent Layer
Known as a competitive layer. The neurons are initialized with the outputs of the feedforward layer, then compete with each other to determine a winner. After the competition, only one neuron will have a nonzero output. The winning neuron indicates which category of input was presented to the network.

Recurrent Layer
The weight matrix of the recurrent layer:
W2 = [1 -ε; -ε 1],  where ε < 1/(S-1) and S is the number of neurons in the recurrent layer.
An iteration of the recurrent layer proceeds as follows:
a2(t+1) = poslin(W2 a2(t)) = poslin([a2_1(t) - ε a2_2(t); a2_2(t) - ε a2_1(t)])
Each element is reduced by the same fraction of the other. The larger element is reduced by less, and the smaller element is reduced by more, so the output of each neuron becomes zero except the one with the largest initial value.

Hamming Operation: First Layer
Input (rough banana): p = [-1 -1 -1]^T
a1 = W1 p + b1 = [-1 1 -1; 1 1 -1][-1; -1; -1] + [3; 3] = [1 + 3; -1 + 3] = [4; 2]

Hamming Operation: Second Layer
With ε = 0.5:
a2(1) = poslin(W2 a2(0)) = poslin([1 -0.5; -0.5 1][4; 2]) = poslin([3; 0]) = [3; 0]
a2(2) = poslin(W2 a2(1)) = poslin([1 -0.5; -0.5 1][3; 0]) = poslin([3; -1.5]) = [3; 0]
The output has converged with only the first neuron nonzero, so the input is classified as a banana (prototype p1).
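The whole Hamming operation can be sketched in plain Python (the lab later asks for Matlab; this is only an illustrative Python version, with ε = 0.5 as on the slide and helper names of my own choosing):

```python
# poslin: positive linear transfer function, n for n > 0, else 0.
def poslin(n):
    return n if n > 0 else 0

p1 = [-1, 1, -1]  # banana prototype
p2 = [1, 1, -1]   # apple prototype
W1 = [p1, p2]
R = 3
eps = 0.5         # must satisfy eps < 1/(S-1); here S = 2

def hamming(p):
    # Feedforward layer: correlation with each prototype, plus R.
    a = [sum(w * x for w, x in zip(row, p)) + R for row in W1]
    # Recurrent (competitive) layer: iterate until the output stabilizes.
    while True:
        new = [poslin(a[0] - eps * a[1]), poslin(a[1] - eps * a[0])]
        if new == a:
            return a
        a = new

print(hamming([-1, -1, -1]))  # rough banana -> [3.0, 0]: banana neuron wins
```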

Exercise
Suppose that we want to distinguish between bananas and pineapples:
p1 = [-1 1 -1]^T (banana)
p2 = [-1 -1 1]^T (pineapple)
(i) Design a perceptron to recognize these patterns.
(ii) Design a Hamming network to recognize these patterns.

Lab Work 1
Design a Hamming network to recognize the following Arabic numerals.

Lab Work 1
Write a Matlab program to obtain the output of the designed Hamming network. Test the designed network with the following patterns.