Neurocomputing 2 (1990/91) 131-133, Elsevier

Conference report

International Joint Conference on Neural Networks - IJCNN 90

June 17-21, 1990, San Diego, CA, USA

R.C. Johnson, Cognizer Connection, 333 S. State Street, Suite V 141, Portland, OR 97034, USA

The International Joint Conference on Neural Networks in Summer 1990 witnessed new microchips, new systems and a hybrid approach that enlists the help of fuzzy logic.

Perhaps the most interesting development at the conference was a new microchip architecture from Adaptive Solutions Inc. (Beaverton, OR). This chip set is the first general-purpose microprocessor for neural applications.

Adaptive Solutions has taken an all-digital approach to simulating any neural-network model with downloadable microcode. The two-chip set runs at 12.8 billion connections per second (CPS) using one-bit synaptic weights and at 1.6 billion CPS using 16-bit weights. It learns at 250 to 300 million connection updates per second (CUPS), about 100 times faster than current neural-accelerator boards.

Adaptive Solutions uses a virtual node concept to accomplish on-chip learning. A single layer of 64 neurons is time-division multiplexed among the various layers in a particular neural architecture. The user's chosen learning method is downloaded into a writable control store on the microsequencer chip. That one chip controls any number of processing chips by issuing 31-bit wide microcode to them. Each neuron has 4 kbytes of memory, yielding a total of 2M 1-bit connections, 256k 8-bit connections or 128k 16-bit connections for each processing chip.
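The stated capacities follow from simple arithmetic: 64 neurons times 4 kbytes is 256 kbytes (2M bits) per processing chip, which holds exactly 2M 1-bit, 256k 8-bit or 128k 16-bit weights. As a rough illustration of the virtual-node idea (a hypothetical Python sketch, not Adaptive Solutions' microcode), one physical bank of 64 processing elements can be swept, time slice by time slice, over the logical layers of a much larger network:

```python
import numpy as np

PHYSICAL_PES = 64  # one physical bank of 64 neurons

def layer_forward(weights, x):
    """Evaluate one logical layer by reusing the 64 physical PEs in chunks."""
    out = np.empty(weights.shape[0])
    for start in range(0, weights.shape[0], PHYSICAL_PES):
        chunk = weights[start:start + PHYSICAL_PES]            # logical neurons
        out[start:start + PHYSICAL_PES] = np.tanh(chunk @ x)   # mapped onto the PEs
    return out

def network_forward(layer_weights, x):
    """Time-division multiplex the physical layer over every logical layer."""
    for w in layer_weights:
        x = layer_forward(w, x)
    return x

rng = np.random.default_rng(0)
# a hypothetical 256-128-32 network simulated on the single physical layer
weights = [0.1 * rng.standard_normal((128, 256)), 0.1 * rng.standard_normal((32, 128))]
print(network_forward(weights, rng.standard_normal(256)).shape)  # (32,)
```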

Adaptive Solutions is using a standard CMOS fabrication technology, but will not reveal the identity of the company manufacturing the chips for it (although both Mitsubishi and Sharp have resident engineers in its offices, reportedly working on 'applications' of the chip set).

Accelerator cards

At the conference several new neural-network accelerator boards were introduced. Neurocomputing Ltd. showed its Astracards for the PC; for the Macintosh there was a system from Ornicon Corp., and for the Compaq there was one from Ford Aerospace Corp.

Neurocomputing Ltd.'s (Alresford, England) PC-based boards were developed in conjunction with Cambridge University. The resulting line of Astracards acts as neural-network accelerator boards for AT-bus PCs. The neural accelerators process 16 to 33 million CPS in feedforward mode, or 3 to 5 million CUPS while learning.



The Astracard works with Nestor's software, NeuralWare's NeuralWorks, Olmstead & Watkins' C-language libraries and Neurun, a graphical backpropagation network development environment. Astracards can also be linked to traditional expert-system tool-kits as a prototyping aid or for final deployment.

So far there are two Astracards, one using TI's 320C30 digital-signal processor (DSP) and one using Intel's i860. Neurocomputing is also working on a board using Neural Semiconductor's microchips. The TI-based card achieves 33 MFLOPS while the Intel-based card tops out at 66 MFLOPS. Both plug into the AT bus and run under MS-DOS, Windows or Unix. The TI-based card processes 16 million CPS, or 3 million CUPS while learning. Its total capacity is 3 million neurons and connections. The Intel-based card can process 33 million CPS (5 million CUPS) and control a total net size of 8 million neurons and connections.

Both Astracards support Microsoft's Fortran and C-language compilers. Both can loan their on-board memories to the host PC, for RAM-disks or memory expansion, when they are not being used for neural simulations.

Ornicon Corp. (San Diego, CA) offered its Sonnet, a System for Operating Neural Nets. Sonnet is based on a Mac II host with neural- and signal-processing cards plugged into it. Ornicon recommends the system for signal detection and classification, industrial inspection, diagnostic imaging, reconnaissance, surveillance and process control.

Sonnet can perform pattern classification using a realtime front-end signal preprocessor and feature extractor plus a back-end neural network simulation also running in realtime. A graphical editor and on-screen electronic notebook help define network topology and direct the learning process. The training method used is backpropagation of errors with user-variable training rates that can be used like simulated annealing. Training can terminate after a certain number of passes or when a mean-squared error level has been reached. After training, Sonnet can classify incoming signals in realtime, issuing alarms when it detects certain patterns. Systems designed with Sonnet can also be accelerated further with Neural Semiconductor's microchips.
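The training regime described, a decaying rate plus two stopping criteria, can be sketched in a few lines. The following hypothetical Python fragment uses a single-layer delta-rule update as a stand-in for the full backpropagation pass; it is an illustration of the control flow, not Ornicon's implementation:

```python
import numpy as np

def train(w, data, rate0=0.5, decay=0.99, max_passes=1000, target_mse=1e-3):
    """Anneal the training rate; stop after max_passes or at a target MSE."""
    rate, mse = rate0, float("inf")
    for _ in range(max_passes):
        mse = 0.0
        for x, t in data:
            y = np.tanh(w @ x)                           # forward pass
            err = t - y
            w += rate * np.outer(err * (1 - y * y), x)   # gradient step
            mse += float(err @ err)
        mse /= len(data)
        if mse <= target_mse:                            # error-level criterion
            break
        rate *= decay                                    # annealing-style rate decay
    return w, mse

rng = np.random.default_rng(1)
data = [(rng.standard_normal(4), np.array([0.5, -0.5])) for _ in range(20)]
w, final_mse = train(0.1 * rng.standard_normal((2, 4)), data)
```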

The base system comes with a signal-processing board sporting a 30 kHz sample rate. It can perform preprocessing tasks, such as 64- to 8192-point FFTs, continuously in realtime. It can accept traditional analog inputs as well as digital inputs from a file, TCP/IP network or RS-422 port.
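The kind of front-end preprocessing described amounts to framing a sampled signal and taking power-of-two FFTs as feature vectors. A minimal sketch, with illustrative sizes (the board itself spans 64- to 8192-point transforms):

```python
import numpy as np

def fft_features(signal, n_fft=1024):
    """Frame the signal and return magnitude spectra (power-of-two FFTs)."""
    hops = range(0, len(signal) - n_fft + 1, n_fft)
    return np.array([np.abs(np.fft.rfft(signal[i:i + n_fft])) for i in hops])

fs = 30_000                          # the board's 30 kHz sample rate
t = np.arange(fs) / fs               # one second of samples
sig = np.sin(2 * np.pi * 440 * t)    # a 440 Hz test tone
print(fft_features(sig).shape)       # (29, 513): 29 frames of 513 bins
```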

Ford Aerospace Corp. (Houston, TX) offered its Neural Emulation Tool (NET), a hardware simulator based on a Compaq host computer with Texas Instruments' TMS320C30 digital signal processor (DSP) boards in its backplane. The five-slot backplane can be populated with from 5 to 17 DSPs. The system architecture provides for both private and common memory areas for easy construction of neural simulations.

In operation the Ford neural simulator runs at 147 million CPS when doing backpropagation of errors in feedforward mode with four processor boards in parallel. A single board can achieve 40 million CPS in the same mode. In training, a single processor board learns at 4.2 million CUPS on a large backpropagation network, while four processor boards learn at 15.6 million CUPS. The Compaq host can be used for foreground processing (wordprocessing, spreadsheets, or other work) while a large network trains in the background.

Ford supplies the source code for a backpropagation-of-errors simulation. It also supports other methods including counterpropagation, adaptive resonance (ART-1 and -2), a feature-map classifier and the Barto ARP, with spatio-temporal networks planned. A C compiler is also available for its boards.

Fuzzy logic

At the conference, one company was demonstrating a hybrid system using both neural networks and fuzzy logic. Togai InfraLogic Inc. (Irvine, CA) used the fuzzy associative memory (FAM) concept invented by University of Southern California's Professor Bart Kosko.

Geometrically, a fuzzy set maps to a point in the unit hypercube. The corners of the hypercube are the crisp points of traditional logic, whereas the points inside the cube correspond to fuzzy sets. Each dimension of a given hypercube corresponds to a parameter of the problem. Usually, a problem has a given input space and an output space, defined geometrically as two fuzzy-system hypercubes. A given fuzzy system is a mapping between these hypercubes.

FAMs contain the maps between these fuzzy hypercubes. The FAM operates like a trained neural network in the sense that it maps points between an input space and an output space with a parallel operating network. But a neural network learns its mapping from example input-output pairs, whereas a fuzzy system is specified manually.
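The geometry can be made concrete in a few lines. In the sketch below, a fuzzy set over a four-element universe is simply a point in [0, 1]^4, and recall through a FAM matrix uses max-min composition, one standard FAM recall rule from Kosko's work. The universe and matrix are invented for illustration:

```python
import numpy as np

# A fuzzy set over a 4-element universe is a point in [0, 1]^4.
universe = ["cold", "cool", "warm", "hot"]
crisp_warm = np.array([0.0, 0.0, 1.0, 0.0])   # a corner: a classical set
fuzzy_mild = np.array([0.1, 0.6, 0.8, 0.2])   # an interior point: a fuzzy set

def fam_recall(a, M):
    """Max-min composition: b_j = max_i min(a_i, M_ij)."""
    return np.minimum(a[:, None], M).max(axis=0)

# A hypothetical FAM matrix mapping the 4-D input cube to a 2-D output cube.
M = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.3, 0.7],
              [0.1, 0.9]])
print(fam_recall(fuzzy_mild, M))  # [0.6, 0.7], a point in the output cube
```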

Kosko has modified the FAM's transformation maps to enlist the help of a neural learning method. In effect, he can reverse-engineer a fuzzy system's rules with a neural network front-end.

FAMs solve the neural training dilemma: too little training and a neural network is unreliable when dealing with items outside its training set, but too much training and it might as well be a look-up table of every possible input-output pair.

FAMs learn in one pass over the training data, but not like a look-up table. Rather, they behave like a set of weighted principles that can be inspected and perfected one by one.

For control applications, human or automatic controllers generate a typical stream of input-output data. Adaptive FAMs convert this data to weighted FAM rules using a neural network learning method. The fuzzy system learns which control inputs cause which control outputs, and the number of examples of each transformation in a training data set determines the weight accorded to each FAM rule.
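Taken literally, "the number of examples of each transformation determines the weight" is a counting scheme, and a much-simplified stand-in for the neural clustering Kosko actually uses can be sketched that way. Everything here (categories, data, names) is hypothetical:

```python
from collections import Counter

def tri(a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    return lambda v: max(0.0, min((v - a) / (b - a + 1e-9), (c - v) / (c - b + 1e-9)))

def best_category(value, categories):
    """Assign a crisp sample to its highest-membership fuzzy category."""
    return max(categories, key=lambda name: categories[name](value))

def learn_rule_weights(samples, in_cats, out_cats):
    """Weight each (input, output) rule by its frequency in the data stream."""
    counts = Counter((best_category(x, in_cats), best_category(y, out_cats))
                     for x, y in samples)
    total = sum(counts.values())
    return {rule: n / total for rule, n in counts.items()}

in_cats = {"low": tri(0, 0, 50), "high": tri(50, 100, 100)}
out_cats = {"open": tri(0, 0, 50), "close": tri(50, 100, 100)}
samples = [(10, 15), (20, 25), (80, 90), (85, 95), (15, 10)]
print(learn_rule_weights(samples, in_cats, out_cats))
# {('low', 'open'): 0.6, ('high', 'close'): 0.4}
```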

Fuzzy only

At the conference, HyperLogic Corp. (Escondido, CA) showed a fuzzy-logic decision system for natural-language users. Like a fuzzy expert-system shell, CubiCalc allows a set of rules about an application to be written and then evaluated against realtime inputs, but the rules are written in ordinary language rather than by a knowledge engineer.

HyperLogic, founded by Fred Watkins, the principal in the Olmstead and Watkins neural network company, designed CubiCalc for ordinary users. To all appearances it functions as a user-programmable generic application. But unlike other generic applications such as spreadsheets, fuzzy logic can be used to specify approximate rules of thumb rather than specific formulae.

A CubiCalc system consists of a set of fuzzy rules about an application, written in a language customized for the specific application. To create one, you first break an application into variables, each with a set of adjective modifiers describing the range of values it may take on.

You specify the variables that can be measured by typing in their names and ranges on a fill-in-the-blank screen. Then you define the adjectives, which are the fuzzy categories into which the variables will be cast. Each range gradually blends into the adjacent one at a user-specifiable speed. After the variables and adjectives are defined, you enter as many rules about the situation as you know.
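The variables-adjectives-rules workflow maps directly onto code. The sketch below is a hypothetical Python rendering of the same three steps (not HyperLogic's rule language): a variable with a range, two overlapping adjectives whose blending region plays the role of the user-specifiable speed, and two rules stated in those terms:

```python
def ramp_down(lo, hi):
    """Membership that blends from 1 to 0 across [lo, hi]."""
    return lambda v: max(0.0, min(1.0, (hi - v) / (hi - lo)))

def ramp_up(lo, hi):
    """Membership that blends from 0 to 1 across [lo, hi]."""
    return lambda v: max(0.0, min(1.0, (v - lo) / (hi - lo)))

# Variable "temperature" over 0..100 with two overlapping adjectives;
# the 40..60 overlap is the blending region between categories.
temperature = {"cool": ramp_down(40, 60), "hot": ramp_up(40, 60)}
fan_level = {"slow": 0.2, "fast": 0.9}   # crisp output levels

def evaluate(reading):
    """IF temperature IS hot THEN fan IS fast; IF temperature IS cool THEN fan IS slow."""
    w_hot = temperature["hot"](reading)
    w_cool = temperature["cool"](reading)
    # weighted-average defuzzification over both fired rules
    return (w_hot * fan_level["fast"] + w_cool * fan_level["slow"]) / (w_hot + w_cool + 1e-9)

print(evaluate(55))  # a reading in the overlap fires both rules: ~0.725
```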

Any number of such rules can be entered and simultaneously compared against the current situation in realtime. A CubiCalc fuzzy system gets its inputs from disk files of historical data, such as the past movement of stock indices (useful for fine-tuning rules), or from realtime inputs from the outside world, such as temperature sensors.

As a precursor to using live data, a simulation mode is provided so that an artificial environment can test out a newly formed rule set. In the artificial environment, all the variables of the real environment are modeled over their full ranges in the fuzzy system.