NN07

March 19, 2018 | Author: Yavor D Ivanov | Category: Systems Theory, Algorithms, Systems Science, Cognitive Science, Psychology & Cognitive Science





Description

NEURAL NETWORKS (Невронни мрежи), 2007, Доц. Йордан Колев

• Artificial neural networks (ANNs, or NNs) are composed of parallel processing elements that resemble biological nerve cells. A NN has inputs and outputs.
• The function of an artificial neural network is determined mainly by the connections between its constituent elements. The set of elements and the connections between them defines the structure of the NN.
• Every connection is assigned a weighting coefficient applied to the signal passed between the elements. The set of weighting coefficients is also called the parameters of the NN.

[Figure: a layered network with hidden layers (1 and 2) and an output layer (3)]

• The structure of a NN is chosen according to the specific application and the desired function.
• To function as intended, a NN must be trained (with some exceptions).
• The goal of training is to reach a mode of operation in which a given set of input signals (an input vector) causes the network to produce a specific output signal (an output vector).
• Training is performed by selecting (tuning) the weighting coefficients of the NN: the current output (for a given input) is compared with the desired output (the target), and the weights are changed until the output matches the target.
• The training process uses a set of many such input/output pairs (a numeric sketch of a single neuron follows below).
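The slides describe a neuron's inputs, weighting coefficients, bias, and output only in words. As a concrete illustration, here is a minimal Python sketch of a single artificial neuron; it is not from the original lecture, and the input values, weights, and tanh transfer function are arbitrary choices for the example.

```python
import numpy as np

def neuron_output(p, w, b, transfer=np.tanh):
    """Single artificial neuron: the net input is the inner product of the
    weights with the input vector, plus the bias; the output is the transfer
    (activation) function applied to that net input."""
    n = np.dot(w, p) + b
    return transfer(n)

# Arbitrary example values for a 3-input neuron
p = np.array([0.5, -1.0, 2.0])   # input vector
w = np.array([0.2, 0.4, -0.1])   # weighting coefficients (the NN "parameters")
b = 0.3                          # bias
print(neuron_output(p, w, b))    # a single output signal
```

Training, in these terms, means adjusting w and b until the network's outputs match the targets over the whole set of input/output pairs.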
Neural Network Applications

• The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning in about 1984 with the adaptive channel equalizer. This device, which is an outstanding commercial success, is a single-neuron network used in long-distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier, and a risk analysis system.
• Neural networks have been applied in many other fields since the DARPA report was written. A list of some applications mentioned in the literature follows:

• Aerospace. High-performance aircraft autopilot, flight path simulation, aircraft control systems, autopilot enhancements, aircraft component simulation, aircraft component fault detection.
• Automotive. Automobile automatic guidance system, warranty activity analysis.
• Banking. Check and other document reading, credit application evaluation.
• Defense. Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification.
• Electronics. Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling.
• Entertainment. Animation, special effects, market forecasting.
• Financial. Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction.
• Insurance. Policy application evaluation, product optimization.
• Oil and Gas. Exploration.
• Manufacturing. Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems.
• Medical. Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement.
• Robotics. Trajectory control, forklift robot, manipulator controllers, vision systems.
• Speech. Speech recognition, speech compression, vowel classification, text-to-speech synthesis.
• Securities. Market analysis, automatic bond rating, stock trading advisory systems.
• Telecommunications. Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems.
• Transportation. Truck brake diagnosis systems, vehicle scheduling, routing systems.

Summary
• The list of neural network applications, the money that has been invested in neural network software and hardware, and the depth and breadth of interest in these devices have been growing rapidly. It is hoped that the NN toolbox of Matlab will be useful for neural network educational and design purposes within a broad field of neural network applications.

[Figure: Feedforward Network]
[Figure: hidden layers (1 and 2) and output layer (3)]
[Figure: static network with concurrent (parallel) input]
[Figure: dynamic network with sequential input]
[Figure: FIR Adaptive Filter]

Summary
• The inputs to a neuron include its bias and the sum of its weighted inputs (using the inner product). The output of a neuron depends on the neuron's inputs and on its transfer function. There are many useful transfer functions.
• A single neuron cannot do very much. However, several neurons can be combined into a layer, or into multiple layers, that have great power.
• The architecture of a network consists of a description of how many layers the network has, the number of neurons in each layer, each layer's transfer function, and how the layers are connected to each other. The best architecture to use depends on the type of problem to be represented by the network.
• A very simple problem may be represented by a single layer of neurons. However, single-layer networks cannot solve certain problems. Multiple feed-forward layers give a network greater freedom. For example, any reasonable function can be represented with a two-layer network: a sigmoid layer feeding a linear output layer (see the forward-pass sketch after this summary). Except for purely linear networks, the more neurons in a hidden layer, the more powerful the network.
• If a linear mapping needs to be represented, linear neurons should be used. However, linear networks cannot perform any nonlinear computation; use of a nonlinear transfer function makes a network capable of storing nonlinear relationships between input and output.
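To make the two-layer claim concrete, here is a minimal forward-pass sketch in Python of a sigmoid layer feeding a linear output layer. It is an illustration only: the layer sizes and random weights are placeholders, and real use would train them with one of the learning rules described below.

```python
import numpy as np

def logsig(n):
    """Log-sigmoid transfer function (a common nonlinear choice)."""
    return 1.0 / (1.0 + np.exp(-n))

def two_layer_forward(p, W1, b1, W2, b2):
    """Feed-forward pass: nonlinear (sigmoid) hidden layer, linear output layer."""
    a1 = logsig(W1 @ p + b1)   # hidden layer output
    return W2 @ a1 + b2        # linear output layer

# Example: 1 input, 5 hidden neurons, 1 output (random illustrative weights)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 1)), rng.normal(size=5)
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)
print(two_layer_forward(np.array([0.7]), W1, b1, W2, b2))
```

With enough hidden neurons and trained weights, this architecture can approximate any reasonable input/output mapping, which is exactly why the sigmoid-plus-linear combination is singled out in the summary.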
• Networks with biases can represent relationships between inputs and outputs more easily than networks without biases. For example, a neuron without a bias will always have a net input to the transfer function of zero when all of its inputs are zero, whereas a neuron with a bias can learn to have any net transfer-function input under the same conditions by learning an appropriate value for the bias.
• If several input vectors are to be presented to a network, they may be presented sequentially or concurrently.
• Feed-forward networks cannot perform temporal computation. More complex networks with internal feedback paths are required for temporal behavior.

Training methods
• Many training methods exist. They differ in how the NN parameters are adjusted, and therefore in the number of computational operations required and, correspondingly, in the speed of convergence to the point at which the NN output is considered close enough to the target.
• There are also classes of NNs, for example linear NNs or Hopfield networks, that are designed directly and do not need training.

Learning rules
• We will define a learning rule as a procedure for modifying the weights and biases of a network. (This procedure may also be referred to as a training algorithm.) The learning rule is applied to train the network to perform some particular task. Learning rules in the Matlab toolbox fall into two broad categories: supervised learning and unsupervised learning.
• In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behavior: {p1,t1}, {p2,t2}, ..., {pQ,tQ}, where pq is an input to the network and tq is the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.
• In unsupervised learning, the weights and biases are modified in response to network inputs only; there are no target outputs available. Most of these algorithms perform clustering operations: they categorize the input patterns into a finite number of classes. This is especially useful in such applications as vector quantization.

Backpropagation Algorithm
• There are many variations of the backpropagation algorithm. The simplest implementation of backpropagation learning updates the network weights and biases in the direction in which the performance function decreases most rapidly, that is, along the negative of the gradient. One iteration of this algorithm can be written

    x_{k+1} = x_k - a_k * g_k

where x_k is a vector of current weights and biases, g_k is the current gradient, and a_k is the learning rate.
• There are two different ways in which this gradient descent algorithm can be implemented: incremental mode and batch mode. In the incremental mode, the gradient is computed and the weights are updated after each input is applied to the network. In the batch mode, all of the inputs are applied to the network before the weights are updated (both modes are sketched below).
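The update x_{k+1} = x_k - a_k * g_k and the batch/incremental distinction can be shown on a toy problem. The sketch below is an illustration, not the toolbox implementation: it fits a linear model t ≈ w*p + b by minimizing the sum of squared errors, where the batch version computes the gradient over all inputs before updating and the incremental version updates after each input/target pair.

```python
import numpy as np

def batch_gradient_step(x, P, T, a):
    """One batch-mode iteration x <- x - a*g for the model t = x[0]*p + x[1],
    with sum-of-squared-errors performance; g uses the whole training set."""
    w, b = x
    e = T - (w * P + b)                 # errors over all input/target pairs
    g = np.array([-2 * np.sum(e * P),   # dF/dw
                  -2 * np.sum(e)])      # dF/db
    return x - a * g

def incremental_gradient_steps(x, P, T, a):
    """Incremental mode: compute the gradient and update after each input."""
    for p, t in zip(P, T):
        w, b = x
        e = t - (w * p + b)
        g = np.array([-2 * e * p, -2 * e])
        x = x - a * g
    return x

P = np.array([1.0, 2.0, 3.0])           # toy inputs
T = np.array([2.0, 4.1, 5.9])           # toy targets
x = np.zeros(2)                         # initial weights and bias
for _ in range(2000):
    x = batch_gradient_step(x, P, T, 0.01)
print(x)  # converges toward the least-squares fit: w ~ 1.95, b ~ 0.1
```

Running incremental_gradient_steps repeatedly over the same data reaches a similar solution; the difference is only in when the updates are applied, as the slide states.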
Perceptron

[Figure: Перцептрон (perceptron)]

Summary
• Perceptrons are useful as classifiers. They can classify linearly separable input vectors very well. Convergence is guaranteed in a finite number of steps provided the perceptron can solve the problem.
• Single-layer perceptrons can solve problems only when the data is linearly separable, which is seldom the case. One solution to this difficulty is to use a pre-processing method that results in linearly separable vectors. Or you might use multiple perceptrons in multiple layers. Alternatively, you can use other kinds of networks, such as linear networks or backpropagation networks, which can classify nonlinearly separable input vectors.

Limitations
• Perceptron networks have several limitations. First, the output values of a perceptron can take on only one of two values (0 or 1) due to the hard-limit transfer function. Second, perceptrons can classify only linearly separable sets of vectors. If a straight line or a plane can be drawn to separate the input vectors into their correct categories, the input vectors are linearly separable. If the vectors are not linearly separable, learning will never reach a point where all vectors are classified properly.
• Note, however, that it has been proven that if the vectors are linearly separable, perceptrons trained adaptively will always find a solution in finite time.

ADALINE (Adaptive Linear Neuron networks)

[Figure: ADALINE]

MADALINE (Multiple Neuron Adaptive Filters)

[Figure: MADALINE]

Summary
• ADALINEs may learn only linear relationships between input and output vectors; thus ADALINEs cannot find solutions to some problems. However, even if a perfect solution does not exist, the ADALINE will minimize the sum of squared errors if the learning rate lr is sufficiently small. The network will find as close a solution as is possible given the linear nature of the network's architecture. This property holds because the error surface of a linear network is a multi-dimensional parabola; since parabolas have only one minimum, a gradient descent algorithm (such as the LMS rule) must produce a solution at that minimum (see the sketch below).
• Multiple layers in a linear network do not result in a more powerful network, so the single layer is not a limitation. However, linear networks can solve only linear problems.
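As a final illustration, here is a minimal sketch of LMS (Widrow-Hoff) training for a single ADALINE. The per-sample update form w <- w + 2*lr*e*p and the toy data are assumptions made for this example, not taken from the slides; because the sum-of-squared-errors surface of a linear network is a parabola, this descent settles at its single minimum.

```python
import numpy as np

def adaline_lms(P, T, lr=0.01, epochs=50):
    """Train a single ADALINE (linear neuron) with the LMS (Widrow-Hoff) rule:
    after each presentation, w <- w + 2*lr*e*p and b <- b + 2*lr*e,
    which performs gradient descent on the sum of squared errors."""
    w = np.zeros(P.shape[1])
    b = 0.0
    for _ in range(epochs):
        for p, t in zip(P, T):
            e = t - (np.dot(w, p) + b)   # linear output, then error
            w = w + 2 * lr * e * p
            b = b + 2 * lr * e
    return w, b

# Toy data with an exact linear relationship: t = 1*p1 - 2*p2 + 0.5
P = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0.5, -1.5, 1.5, -0.5])
w, b = adaline_lms(P, T, lr=0.05, epochs=200)
print(w, b)  # approaches w = [1, -2], b = 0.5
```

If the data had no exact linear solution, the same loop would still converge (for small enough lr) to the weights that minimize the sum of squared errors, which is precisely the property the summary describes.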