From rao@cs.rochester.edu Sun May 26 06:08:03 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id GAA28068 for ; Sun, 26 May 1996 06:07:52 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id GAA05999 for ; Sun, 26 May 1996 06:07:50 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa06174; 26 May 96 4:49:32 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa06158; 26 May 96 4:25:34 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa09946; 26 May 96 4:24:49 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa09887; 25 May 96 21:14:29 EDT Received: from cayuga.cs.rochester.edu by CS.CMU.EDU id aa21861; 25 May 96 21:13:49 EDT Received: from skunk.cs.rochester.edu (skunk.cs.rochester.edu [192.5.53.143]) by cayuga.cs.rochester.edu (8.6.9/H) with ESMTP id VAA22201; Sat, 25 May 1996 21:13:36 -0400 Received: (from rao@localhost) by skunk.cs.rochester.edu (8.6.9/K) id VAA24390; Sat, 25 May 1996 21:13:35 -0400 Date: Sat, 25 May 1996 21:13:35 -0400 From: rao@cs.rochester.edu Message-Id: <199605260113.VAA24390@skunk.cs.rochester.edu> To: connectionists@cs.cmu.edu, neuron@cattell.psych.upenn.edu psyc@pucc.princeton.edu, cogneuro@ptolemy-ethernet.arc.nasa.gov, cvnet@skivs.ski.org, inns-l%umdd.bitnet@pucc.princeton.edu, neuronet@tutkie.tut.ac.jp, vision-list@teleosresearch.com Subject: Papers available: Dynamic Models of Visual Recognition The following two papers on dynamic cortical models of visual recognition are now available for retrieval via ftp. Comments/suggestions welcome, -Rajesh Rao (rao@cs.rochester.edu) =========================================================================== A Class of Stochastic Models for Invariant Recognition, Motion, and Stereo Rajesh P.N. Rao and Dana H. Ballard (Submitted to NIPS*96) Abstract We describe a general framework for modeling transformations in the image plane using a stochastic generative model. Algorithms that resemble the well-known Kalman filter are derived from the MDL principle for estimating both the generative weights and the current transformation state. The generative model is assumed to be implemented in cortical feedback pathways while the feedforward pathways implement an approximate inverse model to facilitate the estimation of current state. Using the above framework, we derive stochastic models for invariant recognition, motion estimation, and stereopsis, and present preliminary simulation results demonstrating recognition of objects in the presence of translations, rotations and scale changes. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/invar.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/invar.ps.Z 7 pages; 430K compressed. ========================================================================== Dynamic Model of Visual Recognition Predicts Neural Response Properties In The Visual Cortex Rajesh P.N. Rao and Dana H. Ballard (Neural Computation - in review) Abstract The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. 
In this paper, we describe a stochastic network model of visual recognition that explains these experimental observations by using a hierarchical form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a stochastic learning rule also derived from the MDL principle. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/dynmem.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/dynmem.ps.Z 32 pages; 534K compressed. =========================================================================== From koza@CS.Stanford.EDU Sun May 26 06:08:05 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id GAA28072 for ; Sun, 26 May 1996 06:07:53 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id GAA06001 for ; Sun, 26 May 1996 06:07:51 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ab06174; 26 May 96 4:50:23 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa06160; 26 May 96 4:27:13 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa09953; 26 May 96 4:26:25 EDT Received: from MAILBOX.SRV.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa11720; 26 May 96 2:35:16 EDT Received: from Sunburn.Stanford.EDU by MAILBOX.SRV.CS.CMU.EDU id aa06473; 26 May 96 2:34:18 EDT Received: (from koza@localhost) by Sunburn.Stanford.EDU (8.7.1/8.7.1) id XAA17882; Sat, 25 May 1996 23:32:04 -0700 (PDT) Posted-Date: Sat, 25 May 1996 23:32:04 -0700 (PDT) Date: Sat, 25 May 96 23:32:04 PDT From: John Koza To: neuron-request@CATTELL.psych.upenn.edu, Reinforce@cs.uwa.edu.au Cc: cells@tce.ing.uniroma1.it, connectionists@MAILBOX.SRV.CS.CMU.EDU Subject: GP is competitive with humans on 4 problems Message-ID: We have fixed the problem and the following paper is now available in Post Script. Four Problems for which a Computer Program Evolved by Genetic Programming is Competitive with Human Performance ABSTRACT: It would be desirable if computers could solve problems without the need for a human to write the detailed programmatic steps. That is, it would be desirable to have a domain-independent automatic programming technique in which "What You Want Is What You Get" ("WYWIWYG" - pronounced "wow-eee-wig"). Genetic programming is such a technique. This paper surveys three recent examples of problems (from the fields of cellular automata and molecular biology) in which genetic programming evolved a computer program that produced results that were slightly better than human performance for the same problem. This paper then discusses the problem of electronic circuit synthesis in greater detail. 
It shows how genetic programming can evolve both the topology of a desired electrical circuit and the sizing (numerical values) for each component in a crossover (woofer and tweeter) filter. Genetic programming has also evolved the design for a lowpass filter, the design of an amplifier, and the design for an asymmetric bandpass filter that was described as being difficult-to-design in an article in a leading electrical engineering journal. John R. Koza Computer Science Department 258 Gates Building Stanford University Stanford, California 94305 E-MAIL: Koza@CS.Stanford.Edu Forrest H Bennett III Visiting Scholar Computer Science Department Stanford University E-MAIL: Koza@CS.Stanford.Edu David Andre Visiting Scholar Computer Science Department Stanford University E-MAIL: fhb3@slip.net Martin A. Keane Econometrics Inc. Chicago, IL 60630 Paper available in Postscript via WWW from http://www-cs-faculty.stanford.edu/~koza/ Look under "Research Publications" and "Recent Papers" on the home page. This paper was presented at the IEEE International Conference on Evolutionary Computation on May 20-22, 1996 in Nagoya, Japan. Additional papers on evolving electrical circuits will be presented at the GP-96 conference to be held at Stanford University on July 28-31, 1996. For information, see http://www.cs.brandeis.edu/~zippy/gp-96.html From gbugmann@soc.plym.ac.uk Sun May 26 15:17:27 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id PAA01484 for ; Sun, 26 May 1996 15:17:21 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id PAA07762 for ; Sun, 26 May 1996 15:17:19 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa06156; 26 May 96 4:41:51 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa06154; 26 May 96 4:24:43 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa09940; 26 May 96 4:24:17 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa06366; 25 May 96 12:21:24 EDT Received: from soc.plym.ac.uk by CS.CMU.EDU id aa20228; 25 May 96 12:20:52 EDT Received: (from gbugmann@localhost) by soc.plym.ac.uk (8.7.5/8.7.3(17/4/96)) id RAA25811; Sat, 25 May 1996 17:22:05 +0100 (BST) Date: Sat, 25 May 1996 17:22:04 +0100 (BST) From: "Guido.Bugmann xtn 2566" Sender: "Guido.Bugmann xtn 2566" Reply-To: "Guido.Bugmann xtn 2566" Subject: Connectionist Learning - Some New Ideas To: rao@cs.rochester.edu cc: connectionists@cs.cmu.edu In-Reply-To: <199605181814.OAA09391@skunk.cs.rochester.edu> Message-ID: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; CHARSET=US-ASCII On Sat, 18 May 1996 rao@cs.rochester.edu wrote: > >One loses about 100,000 cortical neurons a day (about a percent of > >the original number every three years) under normal conditions. > > Does anyone have a concrete citation (a journal article) for this or > any other similar estimate regarding the daily cell death rate in the > cortex of a normal brain? I've read such numbers in a number of > connectionist papers but none cite any neurophysiological studies that > substantiate these numbers. A similar question (are there references for 1 millions neurons lost per day ?) came up in a discussion on the topic of robustness on connectionists a few years ago (1992). Some of the replies were: ------------------------------------------------------- >From Bill Skaggs, bill@nsma.arizona.edu : There have been a number of studies of neuron loss in aging. 
It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of the cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons are lost by age 60. Using the standard estimate of ten billion neurons in the neocortex, this works out to about one hundred thousand neurons lost per day of adult life.
Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey and rodent data" DG Flood & PD Coleman, Neurobiology of Aging, 9, (1988) pp.453-464.
--------------------------------------------------------
>From Arshavir Blackwell, arshavir@crl.ucsd.edu :
I have come across a brief reference to adult neural death that may be of use, or at least a starting point. The book is: Dowling, J.E. 1992 Neurons and Networks. Cambridge: Harvard Univ. In a footnote (!) on page 32, he writes: There is typically a loss of 5-10 percent of brain tissue with age. Assuming a brain loss of 7 percent over a life span of 100 years, and 10^11 neurons (100 billion) to begin with, approximately 200,000 neurons are lost per day.
----------------------------------------------------------------
>From Jan Vorbrueggen, jan@neuroinformatik.ruhr-uni-bochum.de
As I remember it, the studies showing the marked reduction in nerve cell count with age were done around the turn of the century. The method, then as now, is to obtain brains of deceased persons, fix them, prepare cuts, count cells microscopically in those cuts, and then estimate the total number by multiplying the sampled cells/(volume of cut) with the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel, I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramidal cells are turned into that form by fixation. The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percent?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that the cell loss with age in the CNS is much lower than generally thought.
----------------------------------------------------------------
>From Paul King, Paul_King@next.com
Moshe Abeles in Corticonics (Cambridge Univ. Press, 1991) writes on page 208 that: "Comparisons of neural densities in the brain of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cells die between the ages of twenty and eighty years (Tomlinson and Gibson, 1980). Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it. Let us assume that every year about 0.5% of the cortical cells die at random...." and goes on to discuss the implications for network robustness.
Reference: Henderson G, Tomlinson BE and Gibson PH (1980) "Cell counts in human cerebral cortex in normal adults throughout life using an image analysis computer" J. Neurol. Sci., 46, pp. 113-136.
-------------------------------------------------------------
>From Robert A. Santiago, rsantiag@note.nsf.gov
"In search of the Engram" The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of the cortex and seeing if it affected memory... Other work was done by a gentleman named Richard F. Thompson. Both speak of the loss of neurons in a system and how integrity was kept. In particular Karl Lashley spoke of memory as holographic...
-------------------------------------------------
Hope it helps... Regards Guido Bugmann
-----------------------------
Dr. Guido Bugmann Neurodynamics Research Group School of Computing University of Plymouth Plymouth PL4 8AA United Kingdom
-----------------------------
Tel: (+44) 1752 23 25 66 / 41 Fax: (+44) 1752 23 25 40 Email: gbugmann@soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/Staff/GuidBugm/Bugmann.html
-----------------------------
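The per-day figures quoted in the replies above are easy to sanity-check. The short Python sketch below simply redoes the arithmetic behind the three estimates (Flood & Coleman as summarized by Skaggs, the Dowling footnote quoted by Blackwell, and Abeles' 0.5%-per-year assumption). The 60-year span of "adult life" and the 10^10 baseline used for the Abeles figure are assumptions added here for illustration; the other numbers come from the quoted messages.

# Back-of-the-envelope check of the neuron-loss figures quoted above.
# All inputs are assumptions stated in (or assumed for) the replies,
# not independent data.

DAYS_PER_YEAR = 365.25

def lost_per_day(total_lost, years):
    """Average number of neurons lost per day over the given number of years."""
    return total_lost / (years * DAYS_PER_YEAR)

# Flood & Coleman (via Skaggs): ~20% of ~10^10 neocortical neurons,
# spread over roughly 60 years of adult life (about age 20 to 80 - assumed span).
flood_coleman = lost_per_day(0.20 * 1e10, 60)

# Dowling footnote (via Blackwell): 7% of 10^11 neurons over a 100-year life span.
dowling = lost_per_day(0.07 * 1e11, 100)

# Abeles: ~0.5% of the cortical cells per year, with an assumed ~10^10 baseline.
abeles = lost_per_day(0.005 * 1e10, 1)

print(f"Flood & Coleman: {flood_coleman:9,.0f} neurons/day")   # ~  91,000
print(f"Dowling:         {dowling:9,.0f} neurons/day")         # ~ 192,000
print(f"Abeles:          {abeles:9,.0f} neurons/day")          # ~ 137,000

All three land within a factor of two of the "100,000 neurons a day" figure that started the thread, which is presumably why that round number circulates so widely.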
From scotts@cs.wisc.edu Mon May 27 11:44:50 1996 Received: from sun22.cs.wisc.edu (sun22.cs.wisc.edu [128.105.40.22]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id LAA11282 for ; Mon, 27 May 1996 11:44:45 -0500 Received: (from scotts@localhost) by sun22.cs.wisc.edu (8.6.12/8.6.12) id LAA00459; Mon, 27 May 1996 11:44:44 -0500 Date: Mon, 27 May 1996 11:44:44 -0500 From: Scott Swanson Message-Id: <199605271644.LAA00459@sun22.cs.wisc.edu> To: gbugmann@soc.plym.ac.uk Subject: Re: Connectionist Learning - Some New Ideas Cc: ml@cs.wisc.edu
I wonder how exactly Tomlinson & Gibson characterized "normal" individuals when studying neuronal attrition. Given the ubiquitous consumption of alcohol in most human cultures, & certainly in western cultures, and given that ethanol is a known neurotoxin, one wonders whether the observed effect has more to do with beer than with "normal" human biology.
Scott Swanson
From phkywong@uxmail.ust.hk Mon May 27 14:31:52 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id OAA12598 for ; Mon, 27 May 1996 14:31:47 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id OAA14469 for ; Mon, 27 May 1996 14:31:44 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa08239; 27 May 96 14:03:10 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa08237; 27 May 96 13:41:05 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa11049; 27 May 96 13:40:48 EDT Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa26289; 27 May 96 8:57:44 EDT Received: from uxmail.ust.hk by EDRC.CMU.EDU id aa12569; 27 May 96 8:56:55 EDT Received: from phsu2.ust.hk by uxmail.ust.hk id <18930-8054>; Mon, 27 May 1996 20:56:15 +0800 From: "Dr. Michael Wong" To: connectionists@cs.cmu.edu Subject: Paper available Message-Id: <96May27.205615hkt.18930-8054+221@uxmail.ust.hk> Date: Mon, 27 May 1996 20:56:13 +0800
The following papers, to be presented at ICONIP'96, are now available via anonymous FTP. (5 pages each)
============================================================================
FTP-host: physics.ust.hk FTP-files: pub/kymwong/robust.ps.gz
Neural Dynamic Routing for Robust Teletraffic Control
W. K. Felix Lor and K. Y. Michael Wong
Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
E-mail addresses: phfelix@usthk.ust.hk, phkywong@usthk.ust.hk
ABSTRACT We study the performance of a neural dynamic routing algorithm on the circuit-switched network under critical network situations. It consists of a teacher generating examples for supervised learning in a group of student neural controllers. Simulations show that the method is robust and superior to conventional routing techniques.
============================================================================
FTP-host: physics.ust.hk FTP-files: pub/kymwong/nonuni.ps.gz
Neural Network Classification of Non-Uniform Data
K. Y. Michael Wong and H. C. Lau
Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail addresses: phkywong@usthk.ust.hk, phhclau@usthk.ust.hk
ABSTRACT We consider a model of non-uniform data, which resembles typical data for system faults in diagnostic classification tasks. Phase diagrams illustrate the role reversal of the informators and background as parameters change. With no prior knowledge about the non-uniformity, the Bayesian classifier may perform worse than other neural network classifiers for few examples.
============================================================================
FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get robust.ps.gz (or get nonuni.ps.gz) ftp> quit unix> gunzip robust.ps.gz (or gunzip nonuni.ps.gz) unix> lpr robust.ps (or lpr nonuni.ps)
From jorn@let.rug.nl Tue May 28 21:06:44 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id VAA06688 for ; Tue, 28 May 1996 21:06:31 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id VAA02413 for ; Tue, 28 May 1996 21:06:29 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa09641; 28 May 96 19:54:29 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa09639; 28 May 96 19:28:38 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa12245; 28 May 96 19:28:00 EDT Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa05520; 28 May 96 3:38:49 EDT Received: from freya.let.rug.nl by RI.CMU.EDU id aa01875; 28 May 96 3:37:46 EDT Received: from grid.let.rug.nl (modi.let.rug.nl) by freya.let.rug.nl with ESMTP (1.37.109.16/16.2) id AA028329061; Tue, 28 May 1996 09:37:42 +0200 Message-Id: <199605280737.AA028329061@freya.let.rug.nl> Received: by grid.let.rug.nl (1.37.109.16/16.2) id AA284169054; Tue, 28 May 1996 09:37:34 +0200 Date: Tue, 28 May 1996 09:37:34 +0200 From: Jorn Veenstra To: connectionists@cs.cmu.edu Subject: postdoc position in Neurolinguistics
POSTDOCTORAL POSITION AVAILABLE
The Netherlands Organization for Scientific Research (NWO) will make a THREE YEAR POSTDOCTORAL POSITION AVAILABLE within the project "Neurological Basis of Language" (NWO project 030-30-431) to a candidate whose Ph.D. research was directed toward computational modelling of cognitive functions (preferably language) based on psychological and neuroanatomical data. The role of the postdoc would be to develop computational models of language processing which employ physiologically plausible assumptions and are compatible with or predict the results of psycholinguistic experimental evidence on the time course and structure of language processing.
The goal of the project as a whole is to investigate localization of language functions using positron emission tomography and the time course of language processing using event-related potentials, in order to develop a neurologically plausible model of language. Contact Dr. Laurie A. Stowe, Dept. of Linguistics, Faculty of Letters, University of Groningen, Postbus 716, 9700 AS Groningen, Netherlands, 31 50 636627 or stowe@let.rug.nl for further information. Applications should be accompanied by a curriculum vitae, two references (sent directly by the referees), and documentation of research experience in the form of published and in-progress articles.
From lba@inesc.pt Tue May 28 21:06:46 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id VAA06690 for ; Tue, 28 May 1996 21:06:33 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id VAA02415 for ; Tue, 28 May 1996 21:06:30 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ab09679; 28 May 96 20:03:43 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id ab09643; 28 May 96 19:29:36 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa12256; 28 May 96 19:28:51 EDT Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa10163; 28 May 96 11:55:33 EDT Received: from inesc.inesc.pt by RI.CMU.EDU id aa03385; 28 May 96 11:52:59 EDT Received: from ilusion.inesc.pt by inesc.inesc.pt with SMTP; id AA19235 (/); Tue, 28 May 1996 16:26:13 +0100 Received: from aleph by ilusion.inesc.pt (4.1/OSversion) id AA26815; Tue, 28 May 96 16:26:10 +0100 Sender: Luis.Almeida@inesc.pt Message-Id: <31AB1B11.6EEA4806@inesc.pt> Date: Tue, 28 May 1996 16:26:09 +0100 From: "Luis B. Almeida" Organization: INESC / IST X-Mailer: Mozilla 2.0 (X11; I; SunOS 4.1.3_U1 sun4m) Mime-Version: 1.0 To: connectionists@cs.cmu.edu Subject: paper available Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit
The following paper, which will appear in the Proceedings of the IEEE International Conference on Neural Networks 1996, Washington DC, June 1996, is available at ftp://aleph.inesc.pt/pub/lba/icnn96.ps
An Objective Function for Independence
Goncalo Marques and Luis B. Almeida
IST and INESC, Lisbon, Portugal
Abstract
The problem of separating a linear or nonlinear mixture of independent sources has been the focus of many studies in recent years. It is well known that the classical principal component analysis method, which is based on second order statistics, performs poorly even in the linear case if the sources do not have Gaussian distributions. Based on this fact, several algorithms take into account higher than second order statistics in their approach to the problem. Other algorithms use the Kullback-Leibler divergence to find a transformation that can separate the independent signals. Nevertheless, the great majority of these algorithms only take into account a finite number of statistics, usually up to the fourth order, or use some kind of smoothed approximations. In this paper we present a new class of objective functions for source separation. The objective functions use statistics of all orders simultaneously, and have the advantage of being continuous, differentiable functions that can be computed directly from the training data.
A derivation of the class of functions for two dimensional data, some numerical examples illustrating its performance, and some implementation considerations are described. In this electronic version a few typos of the printed version have been corrected. The paper is reproduced with permission from the IEEE. Please read the copyright notice at the beginning of the document. -- Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba@inesc.pt or luis.almeida@inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From geoff@salk.edu Tue May 28 21:06:47 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id VAA06692 for ; Tue, 28 May 1996 21:06:35 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id VAA02417 for ; Tue, 28 May 1996 21:06:33 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ac09679; 28 May 96 20:04:54 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id ac09643; 28 May 96 19:29:37 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa12261; 28 May 96 19:29:19 EDT Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa13980; 28 May 96 15:48:29 EDT Received: from helmholtz.salk.edu by RI.CMU.EDU id aa04658; 28 May 96 15:47:54 EDT Received: from gauss.sdsc.edu (gauss.salk.edu) by salk.edu (4.1/SMI-4.1) id AA26708; Tue, 28 May 96 12:45:03 PDT Date: Tue, 28 May 96 12:45:03 PDT From: Geoff Goodhill Message-Id: <9605281945.AA26708@salk.edu> To: connectionists@cs.cmu.edu Subject: Cell death during embryogenesis Cc: geoff@salk.edu Those following the current thread on cell death may be interested in a recent experimental investigation of this during development of the mouse by Blaschke, Staley & Chun (abstract below). The most striking finding is that at embryonic day 14, 70% of cortical cells are dying. 
Geoff Goodhill
The Salk Institute 10010 North Torrey Pines Road La Jolla, CA 92037
Email: geoff@salk.edu http://www.cnl.salk.edu/~geoff
TI: WIDESPREAD PROGRAMMED CELL-DEATH IN PROLIFERATIVE AND POSTMITOTIC REGIONS OF THE FETAL CEREBRAL-CORTEX
AU: BLASCHKE_AJ, STALEY_K, CHUN_J
NA: UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,NEUROSCI & BIOMED SCI GRAD PROGRAM,9500 GILMAN DR,LA JOLLA,CA,92093 UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,NEUROSCI & BIOMED SCI GRAD PROGRAM,LA JOLLA,CA,92093 UNIV CALIF SAN DIEGO,SCH MED,DEPT PHARMACOL,BIOL GRAD PROGRAM,LA JOLLA,CA,92093
JN: DEVELOPMENT, 1996, Vol.122, No.4, pp.1165-1174
IS: 0950-1991
AB: A key event in the development of the mammalian cerebral cortex is the generation of neuronal populations during embryonic life. Previous studies have revealed many details of cortical neuron development including cell birthdates, migration patterns and lineage relationships. Programmed cell death is a potentially important mechanism that could alter the numbers and types of developing cortical cells during these early embryonic phases. While programmed cell death has been documented in other parts of the embryonic central nervous system, its operation has not been previously reported in the embryonic cortex because of the lack of cell death markers and the difficulty in following the entire population of cortical cells. Here, we have investigated the spatial and temporal distribution of dying cells in the embryonic cortex using an in situ end-labelling technique called 'ISEL+' that identifies fragmented nuclear DNA in dying cells with increased sensitivity. The period encompassing murine cerebral cortical neurogenesis was examined, from embryonic days 10 through 18. Dying cells were rare at embryonic day 10, but by embryonic day 14, 70% of cortical cells were found to be dying. This number declined to 50% by embryonic day 18, and few dying cells were observed in the adult cerebral cortex. Surprisingly, while dying cells were observed throughout the cerebral cortical wall, the majority were found within zones of cell proliferation rather than in regions of postmitotic neurons. These observations suggest that multiple mechanisms may regulate programmed cell death in the developing cortex. Moreover, embryonic cell death could be an important factor enabling the selection of appropriate cortical cells before they complete their differentiation in postnatal life.
KP: RHESUS-MONKEY, POSTNATAL-DEVELOPMENT, MIGRATION PATTERNS, REGRESSIVE EVENTS, DNA FRAGMENTATION, NERVOUS-SYSTEM, GANGLION- CELL, VISUAL-CORTEX, MOUSE, RAT WA: PROGRAMMED CELL DEATH, CEREBRAL CORTEX, EMBRYONIC DEVELOPMENT, MOUSE From bap@valaga.salk.edu Wed May 29 11:34:10 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id LAA21319 for ; Wed, 29 May 1996 11:33:54 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id LAA09535 for ; Wed, 29 May 1996 11:33:51 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa10445; 29 May 96 5:22:57 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa10443; 29 May 96 5:08:05 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa12966; 29 May 96 5:07:31 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa07736; 29 May 96 3:56:22 EDT Received: from valaga.salk.edu by CS.CMU.EDU id aa11233; 29 May 96 3:55:54 EDT Received: (from bap@localhost) by valaga.salk.edu (8.6.12/8.6.9) id AAA04235; Wed, 29 May 1996 00:55:38 -0700 Date: Wed, 29 May 1996 00:55:38 -0700 From: Barak Pearlmutter Message-Id: <199605290755.AAA04235@valaga.salk.edu> To: connectionists@cs.cmu.edu Subject: Paper Available --- Blind Source Separation Reply-to: Barak.Pearlmutter@alumni.cs.cmu.edu The following paper (which will appear at the 1996 International Conference on Neural Information Processing this Fall) is available as http://www.cnl.salk.edu/~bap/papers/iconip-96-cica.ps.gz A Context-Sensitive Generalization of ICA Barak A. Pearlmutter and Lucas C. Parra Abstract Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing $n$ unknown independent sources through an unknown $n \times n$ mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where standard ICA cannot, including sources with low kurtosis, colored gaussian sources, and sources which have gaussian histograms. Since ICA is a special case of cICA, the MLE derivation provides as a corollary a rigorous derivation of classic ICA. 
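For readers who have not seen the square linear setup the abstract refers to, the sketch below (plain Python/NumPy, an illustration only) shows the baseline problem: n independent sources mixed by an unknown n-by-n matrix, and an unmixing matrix learned with the standard natural-gradient infomax/maximum-likelihood ICA rule. It demonstrates ordinary, context-free ICA rather than the contextual ICA of the paper; the mixing matrix, source distribution, learning rate and iteration count are arbitrary choices made for the example.

import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 5000

# Unknown independent sources (super-Gaussian, here Laplacian) and an
# unknown square mixing matrix A -- the "square linear" setting above.
S = rng.laplace(size=(n, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S                        # observed mixtures, one column per sample
X -= X.mean(axis=1, keepdims=True)

# Standard ICA via the natural-gradient rule dW = lr * (I - f(u) u^T) W,
# with u = W x and f = tanh (a score suited to super-Gaussian sources).
W = np.eye(n)
lr = 0.05
for _ in range(500):
    U = W @ X
    W += lr * (np.eye(n) - np.tanh(U) @ U.T / T) @ W

# If separation worked, W @ A should be close to a scaled permutation matrix,
# i.e. each recovered component follows exactly one of the original sources.
print(np.round(W @ A, 2))

Roughly speaking, contextual ICA replaces the fixed tanh score above with an adaptive, history-sensitive source model, which is what lets it exploit the temporal structure that the fixed rule ignores.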
From listerrj@helios.aston.ac.uk Wed May 29 11:34:12 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id LAA21312 for ; Wed, 29 May 1996 11:33:51 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id LAA09529 for ; Wed, 29 May 1996 11:33:49 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa09679; 28 May 96 20:02:48 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa09643; 28 May 96 19:29:33 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa12251; 28 May 96 19:28:38 EDT Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa06440; 28 May 96 5:48:28 EDT Received: from email.aston.ac.uk by EDRC.CMU.EDU id aa15270; 28 May 96 5:47:37 EDT Received: from sun.aston.ac.uk (actually host dirac.aston.ac.uk) by email.aston.ac.uk with SMTP (PP); Tue, 28 May 1996 10:50:01 +0100 Message-Id: <11161.199605280946@sun.aston.ac.uk> To: Connectionists@cs.cmu.edu Subject: Postdoctoral Research Fellowship at Aston University X-Mailer: Mew version 1.02 on Emacs 19.29.2 Mime-Version: 1.0 Content-Type: Text/Plain; charset=us-ascii Date: Tue, 28 May 1996 10:46:11 +0100 From: Richard Lister ---------------------------------------------------------------------- Neural Computing Research Group ------------------------------- Dept of Computer Science and Applied Mathematics Aston University, Birmingham, UK POSTDOCTORAL RESEARCH FELLOWSHIP -------------------------------- On-line Learning in Radial Basis Function Networks -------------------------------------------------- *** Full details at http://www.ncrg.aston.ac.uk/ *** The Neural Computing Research Group at Aston is looking for a highly motivated individual for a 2 year postdoctoral research position in the area of `On-line Learning in Radial Basis Function Networks'. The emphasis of the research will be on applying a theoretically well- founded approach based on methods adopted from statistical mechanics to analyse learning in RBF networks. Potential candidates should have strong mathematical and computational skills, with a background in statistical mechanics and neural networks. Conditions of Service --------------------- Salaries will be up to point 6 on the RA 1A scale, currently 15,566 UK pounds. The salary scale is subject to annual increments. How to Apply ------------ If you wish to be considered for this Fellowship, please send a full CV and publications list, including full details and grades of academic qualifications, together with the names of 3 referees, to: Dr. David Saad Neural Computing Research Group Dept. of Computer Science and Applied Mathematics Aston University Birmingham B4 7ET, U.K. Tel: 0121 333 4631 Fax: 0121 333 4586 e-mail: D.Saad@aston.ac.uk (e-mail submission of postscript files is welcome) Closing date: 21 June, 1996. 
---------------------------------------------------------------------- From lba@inesc.pt Thu May 30 00:53:41 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id AAA05549 for ; Thu, 30 May 1996 00:53:30 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id AAA19962 for ; Thu, 30 May 1996 00:53:28 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa11168; 29 May 96 14:45:27 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa11165; 29 May 96 14:31:24 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa13285; 29 May 96 14:30:43 EDT Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa10309; 29 May 96 9:11:41 EDT Received: from inesc.inesc.pt by RI.CMU.EDU id aa07766; 29 May 96 9:10:33 EDT Received: from ilusion.inesc.pt by inesc.inesc.pt with SMTP; id AA18812 (/); Wed, 29 May 1996 13:43:15 +0100 Received: from aleph by ilusion.inesc.pt (4.1/OSversion) id AA11234; Wed, 29 May 96 13:43:15 +0100 Sender: Luis.Almeida@inesc.pt Message-Id: <31AC4662.5E652F78@inesc.pt> Date: Wed, 29 May 1996 13:43:14 +0100 From: "Luis B. Almeida" Organization: INESC / IST X-Mailer: Mozilla 2.0 (X11; I; SunOS 4.1.3_U1 sun4m) Mime-Version: 1.0 To: connectionists@cs.cmu.edu Subject: paper available - ftp instructions Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Since some people don't know how to translate the address ftp://aleph.inesc.pt/pub/lba/icnn96.ps that was given for the paper An Objective Function for Independence Goncalo Marques and Luis B. Almeida IST and INESC, Lisbon, Portugal into anonymous ftp commands, I'm giving those commands below: >ftp aleph.inesc.pt Connected to aleph.inesc.pt. 220 aleph FTP server (SunOS 4.1) ready. Name (aleph.inesc.pt:lba): [type 'anonymous' here] 331 Guest login ok, send ident as password. Password: [type your e-mail address here] 230 Guest login ok, access restrictions apply. ftp> cd /pub/lba ftp> get icnn96.ps ftp> bye If the address 'aleph.inesc.pt' cannot be resolved, you can also use >ftp 146.193.2.131 instead of >ftp aleph.inesc.pt Happy downloading! Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. 
Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba@inesc.pt or luis.almeida@inesc.pt ------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From zhuh@helios.aston.ac.uk Thu May 30 14:19:56 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id OAA16789 for ; Thu, 30 May 1996 14:19:45 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id OAA27870 for ; Thu, 30 May 1996 14:19:43 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa11470; 29 May 96 18:14:03 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa11466; 29 May 96 18:02:34 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa13403; 29 May 96 18:02:09 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa16640; 29 May 96 14:37:25 EDT Received: from email.aston.ac.uk by CS.CMU.EDU id aa03089; 29 May 96 14:35:13 EDT Received: from sun.aston.ac.uk (actually host darwin.aston.ac.uk) by email.aston.ac.uk with SMTP (PP); Wed, 29 May 1996 19:37:20 +0100 From: zhuh Date: Wed, 29 May 1996 19:33:48 +0000 Message-Id: <1840.9605291833@sun.aston.ac.uk> To: connectionists@cs.cmu.edu Subject: Paper: efficient online training of curved models using ancillary statistics Cc: zhuh@helios.aston.ac.uk X-Sun-Charset: US-ASCII content-length: 1558 The following paper is accepted for 1996 International Conference on Neural Information Processing, Hong Kong, Sept. 1996. ftp://cs.aston.ac.uk/neural/zhuh/ac1.ps.Z Using Ancillary Statistics in On-Line Learning Algorithms Huaiyu Zhu and Richard Rohwer Neural Computing Research Group Dept of Comp. Sci. Appl. Math. Aston Univ., Birmingham B4 7ET, UK Abstract Neural networks are usually curved statistical models. They do not have finite dimensional sufficient statistics, so on-line learning on the model itself inevitably loses information. In this paper we propose a new scheme for training curved models, inspired by the ideas of ancillary statistics and adaptive critics. At each point estimate an auxiliary flat model (exponential family) is built to locally accommodate both the usual statistic (tangent to the model) and an ancillary statistic (normal to the model). The auxiliary model plays a role in determining credit assignment analogous to that played by an adaptive critic in solving temporal problems. The method is illustrated with the Cauchy model and the algorithm is proved to be asymptotically efficient. 
-- Huaiyu Zhu, PhD email: H.Zhu@aston.ac.uk Neural Computing Research Group http://neural-server.aston.ac.uk/People/zhuh Dept of Computer Science ftp://cs.aston.ac.uk/neural/zhuh and Applied Mathematics tel: +44 121 359 3611 x 5427 Aston University, fax: +44 121 333 6215 Birmingham B4 7ET, UK From stavrosz@med.auth.gr Fri May 31 00:29:31 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id AAA00622 for ; Fri, 31 May 1996 00:29:10 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id XAA04178 for ; Thu, 30 May 1996 23:12:13 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa12634; 30 May 96 14:24:20 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa12623; 30 May 96 14:00:07 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa14281; 30 May 96 13:58:30 EDT Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa01706; 30 May 96 10:16:42 EDT Received: from antigoni.med.auth.gr by EDRC.CMU.EDU id aa26366; 30 May 96 10:15:12 EDT Received: from localhost (antigoni.med.auth.gr) by antigoni.med.auth.gr (5.x/MED-AUTH-1.0) id AA11576; Thu, 30 May 1996 16:55:17 +0300 Date: Thu, 30 May 1996 16:55:16 +0300 Message-Id: <9605301355.AA11576@antigoni.med.auth.gr> Organization: X-Sender: stavrosz@med.auth.gr (Unverified) X-Mailer: Windows Eudora Light Version 1.5.2 Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" To: Connectionists@cs.cmu.edu From: Stavros Zanos Subject: Neuronal Cell Death This recently published (April '96) review about the amount and the possible role of neuronal cell-death during development, could be of an interest to some of the readers of this list. James T. Voyvodic (1996) Cell Death in Cortical Development: How Much? Why? So What? Neuron 16(4) Stavros Zanos Aristotle University School of Medicine Thessaloniki, Macedonia, Greece "If I Had More Time, I Would Have Written You A Shorter Letter" Pascal From pazzani@super-pan.ICS.UCI.EDU Fri May 31 00:43:50 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id AAA02171 for ; Fri, 31 May 1996 00:43:44 -0500 Received: from paris.ics.uci.edu (paris.ics.uci.edu [128.195.1.50]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id AAA04866; Fri, 31 May 1996 00:43:40 -0500 Received: from super-pan.ics.uci.edu by paris.ics.uci.edu id aa16980; 30 May 96 21:11 PDT To: ML-LIST:; Subject: Machine Learning List: Vol. 8, No. 10 Reply-to: ml@ics.uci.edu Date: Thu, 30 May 1996 20:57:50 -0700 From: Michael Pazzani Message-ID: <9605302111.aa16980@paris.ics.uci.edu> Machine Learning List: Vol. 8, No. 10 Thursday, May 30, 1996 Contents: Applications of Machine Learning: Precise Rules are not interesting Student summer job at NEC Research Institute Position Announcement: Postdoctoral Associate in Info Systems IEEE SMC Transactions: Special Issue on Autonomous Learning Robots EPSRC CASE Research Studentship in Intelligent Data Analysis Call for Participation for ISIS CALL for PAPERS call for papers: Ev. Methods for Program Induction Workshop: WHAT IS INDUCTIVE LEARNING? (program) HeKoNN96-CfP The Machine Learning List is moderated. Contributions should be relevant to the scientific study of machine learning. Mail contributions to ml@ics.uci.edu. Mail requests to be added or deleted to ml-request@ics.uci.edu. 
Back issues may be obtained from http://www.ics.uci.edu/AI/ML/Machine-Learning.html
----------------------------------------------------------------------
From: Gregory Piatetsky-Shapiro Subject: Applications of Machine Learning: Precise Rules are not interesting Date: Fri, 24 May 1996 14:06:51 -0400
In reading Katharina Morik's comments in ML list 8:8, Monday April 29, on the suitability of ILP methods for KDD, I noticed a comment:
> - the learned rules were not interesting; in general, the more accuracy
> we achieved, the worse became the interestingness.
This has been our experience for several years with knowledge discovery in real databases -- almost all of the rules that are very strong or close to exact are not interesting, because either 1) they represent common domain knowledge (e.g. Doctor_Type = OB/GYN --> Patient_Sex = Female) 2) they deal with dependencies not interesting to the user (e.g. Doctor_Name = X --> Appointment_Day = Wednesday) 3) they are results of a database redundantly storing related fields (e.g. Code=OB/GYN --> Profession=Obstetrics)
Our experience has been that the "interestingness" of a pattern is related to potential usefulness/payoff. Just maximization of accuracy is not sufficient for finding interesting rules. We need to ask a question: Does the new piece of knowledge enable users to make a better decision, and what is the payoff from that decision?
A second point is that even when strong rules are not interesting, they may help in finding exceptions/changes. Suppose you find that 99.5% of the patients of doctor X (an OB/GYN) are female. Now you also found 0.5 percent of data entries that may represent either data errors or some procedures done by the doctor on newborn boys -- in either case probably interesting findings.
As for the comment that Foster's paper would definitely get into KDD, but not into IMLC, perhaps this is KDD-96's gain and IMLC's loss.
Sincerely, Gregory Piatetsky-Shapiro
P.S. By the way, a paper by Foster Provost and Tom Fawcett was among the papers accepted into KDD-96, which had about 210 papers submitted and 38 accepted for presentation.
------------------------------
From: Lee Giles Subject: Student summer job at NEC Research Institute Date: Thu, 30 May 96 16:45:21 EDT
SUMMER RESEARCH POSITION
Due to an unexpected situation, the NEC Research Institute in Princeton, NJ has an immediate opening for a student summer research position in the area of prediction methods applied to disk file organization. The successful candidate must have experience in research and be able to effectively communicate research results. He or she should have knowledge of prediction and time series modelling (e.g. Markov modelling, etc.) and operating systems (specifically disk file organization schemes). The ideal candidate would understand disk drive geometries and be capable of modifying a Linux kernel.
Interested applicants should send their resumes by mail, fax or email to one of the following:
Dr. C. Lee Giles, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, Phone: (609) 951-2642, FAX: 609-951-2482, giles@research.nj.nec.com
Dr. Jim Philbin, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, Phone: (609) 951-2749, FAX: 609-951-2488, philbin@research.nj.nec.com
http://www.neci.nj.nec.com
Applicants must show DOCUMENTATION OF ELIGIBILITY FOR EMPLOYMENT. NEC is an equal opportunity employer. C.
Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ------------------------------ From: Alberto Maria Segre Subject: Position Announcement: Postdoctoral Associate in Info Systems Date: Mon, 20 May 1996 09:46:05 -0500 Position Announcement Postdoctoral Associate in Information Systems Department of Management Sciences The University of Iowa The Department of Management Sciences at the University of Iowa is soliciting applications for a POSTDOCTORAL ASSOCIATE commencing August 1996 in the general area of Information Systems. The successful applicant will hold a PhD in Information Systems or related area. A strong background in computing is essential, as is demonstrated teaching potential. Postdoctoral associates participate in both the teaching and research activities of the department. Duties for this position include teaching one course per semester in the Information Systems area. Postdoctoral associates are also expected to participate in research in an area congruent with the department's current research focus, including computer networking, database systems, artificial intelligence, decision support and expert systems, performance analysis, and optimal design of computer systems. For additional information regarding departmental research activites, consult our World Wide Web page at "http://www.biz.uiowa.edu/mansci". Applicants should send electronic copies of their curriculum vitae (including names and electronic addresses of at least three references) to: Ms. Judy Putney Department of Management Sciences C108 Pappajohn Building The University of Iowa Iowa City, IA 52242-1000 jputney@scout-po.biz.uiowa.edu Alternatively, hardcopy applications are also being accepted. The University of Iowa offers a competitive salary and benefits package: to ensure full consideration, applications should be received by June 1, 1996. The University of Iowa is an affirmative action/equal opportunity employer. Women, minorities and individuals with disabilities are encouraged to apply. ------------------------------ From: Marco DORIGO Subject: IEEE SMC Transactions: Special Issue on Autonomous Learning Robots Date: Sat, 25 May 1996 10:10:24 +0200 The Special Issue on Learning Autonomous Robots I guest edited for the IEEE Transactions on Systems, Man, and Cybernetics is finally available: IEEE Transactions on Systems, Man, and Cybernetics-Part B, Vol.26, No.3, June 1996. I prepared a Mosaic (WWW) page containing the Editorial (in HTML format) and abstracts of the published papers. The page URL is: http://iridia.ulb.ac.be/dorigo/SI/Special_Issue.html Marco Dorigo, TSMC Guest Editor ------------------------------ From: Xiaohui Liu Subject: EPSRC CASE Research Studentship in Intelligent Data Analysis Date: Wed, 15 May 96 23:09:05 BST BIRKBECK COLLEGE DEPARTMENT OF COMPUTER SCIENCE UNIVERSITY OF LONDON EPSRC CASE Research Studentship in Intelligent Data Analysis Applications are invited for an EPSRC CASE PhD studentship in Intelligent Data Analysis (IDA) tenable at the Department of Computer Science, Birkbeck College. The successful candidate will be expected to work on a joint research project with Moorfields Eye Hospital in London, in collaboration with the Institute of Ophthalmology. The project "Improving Glaucoma Service by Intelligent Data Analysis" is to start from 1 October 1996 for three years and is also funded by a research grant from the Moorfields Eye Hospital. 
The IDA Group at Birkbeck specialises in the application of computationally intelligent techniques to data analysis problems. The group has been working with several external organisations in medicine and industry on a variety of IDA research projects, funded by government agencies, industrial sponsorships and charity organisations. This current project follows on from a highly successful three-year collaboration with the above-mentioned two organisations on a project entitled "METRO: Making Eye Test Reliable for Ophthalmologists". Exciting research results have been generated and software developed within the project has been effectively used in a large-scale field investigation and is being further tested in general practitioners' clinics. Applicants should have or expect to gain at least a 2(i) in Computer Science, or an equivalent MSc, and have a good background in Artificial Intelligence, Databases, or Statistics. Please submit a CV as soon as possible, but not later than 15 June 1995, to Dr X Liu, Department of Computer Science, Birkbeck College, Malet Street, London WC1E 7HX, UK. Telephone Dr Liu on (+44) 171-631 6711 or email him (hui@dcs.bbk.ac.uk) if you wish to make an informal enquiry. Information regarding this project and research activities of the IDA Group at Birkbeck can be accessed on the World Wide Web via URL: http://web.dcs.bbk.ac.uk/~hui/IDA/home.html ------------------------------ From: ISIS conference Subject: Call for Participation for ISIS Date: Wed, 15 May 1996 19:48:48 +1000 ISIS CONFERENCE: INFORMATION, STATISTICS AND INDUCTION IN SCIENCE *** Call for Participation *** Old Melbourne Hotel Melbourne, Australia 20-23 August 1996 INVITED SPEAKERS: Henry Kyburg, Jr. (University of Rochester, NY) Marvin Minsky (MIT) J. Ross Quinlan (Sydney University) Jorma J. Rissanen (IBM Almaden Research, San Jose, California) Ray Solomonoff (Oxbridge Research, Mass) This conference will explore the use of computational modeling to understand and emulate inductive processes in science. The problems involved in building and using such computer models reflect methodological and foundational concerns common to a variety of academic disciplines, especially statistics, artificial intelligence (AI) and the philosophy of science. This conference aims to bring together researchers from these and related fields to present new computational technologies for supporting or analysing scientific inference and to engage in collegial debate over the merits and difficulties underlying the various approaches to automating inductive and statistical inference. About the invited speakers: Henry Kyburg is noted for his invention of the lottery paradox (in "Probability and the Logic of Rational Belief", 1961) and his research since then in providing a non-Bayesian foundation for a probabilistic epistemology. Marvin Minsky is one of the founders of the field of artificial intelligence. He is the inventor of the use of frames in knowledge representation, stimulus for much of the concern with nonmonotonic reasoning in AI, noted debunker of Perceptrons and recently the developer of the "society of minds" approach to cognitive science. J. Ross Quinlan is the inventor of the information-theoretic approach to classification learning in ID3 and C4.5, which have become world-wide standards in testing machine learning algorithms. Jorma J. Rissanen invented the Minimum Description Length (MDL) method of inference in 1978, which has subsequently been widely adopted in algorithms supporting machine learning. 
Ray Solomonoff developed the notion of algorithmic complexity in 1960, and his work was influential in shaping the Minimum Message Length (MML) work of Chris Wallace (1968) and the Minimum Description Length (MDL) work of Jorma Rissanen (1978). ========================= Tutorials (Tue 20 Aug 96) ========================= 10am - 1pm: Tutorial 1: Peter Spirtes "Automated Learning of Bayesian Networks" Tutorial 2: Michael Pazzani "Machine Learning and Intelligent Info Access" 2pm - 5pm: Tutorial 3: Jan Zytkow "Automation of Scientific Discovery" Tutorial 4: Paul Vitanyi "Kolmogorov Complexity & Applications" About the tutorial leaders: Peter Spirtes is a co-author of the TETRAD algorithm for the induction of causal models from sample data and is an active member of the research group on causality and induction at Carnegie Mellon University. Mike Pazzani is one of the leading researchers world-wide in machine learning. Current interests include the use of intelligent agents to support information filtering over the Internet. Jan Zytkow is one of the co-authors (with Simon, Langley and Bradshaw) of "Scientific Discovery" (1987), reporting on the series of BACON programs for automating the learning of quantitative scientific laws. Paul Vitanyi is co-author (with Ming Li) of "An Introduction to Kolmogorov Complexity and its Applications (1993) and of much related work on complexity and information-theoretic methods of induction. Professor Vitanyi will be visiting the Department of Computer Science, Monash, for several weeks after the conference. A limited number of free student conference registrations or tutorial registrations will be available by application to the organizers in exchange for part-time work during the conference. Program Committee: Hirotugu Akaike, Lloyd Allison, Shun-Ichi Amari, Mark Bedau, Jim Bezdek, Hamparsum Bozdogan, Wray Buntine, Peter Cheeseman, Honghua Dai, David Dowe, Usama Fayyad, Doug Fisher, Alex Gammerman, Clark Glymour, Randy Goebel, Josef Gruska, David Hand, Bill Harper, David Heckerman, Colin Howson, Lawrence Hunter, Frank Jackson, Max King, Kevin Korb, Henry Kyburg, Rick Lathrop, Ming Li, Nozomu Matsubara, Aleksandar Milosavljevic, Richard Neapolitan, Jon Oliver, Michael Pazzani, J. Ross Quinlan, Glenn Shafer, Peter Slezak, Padhraic Smyth, Ray Solomonoff, Paul Thagard, Neil Thomason, Raul Valdes-Perez, Tim van Gelder, Paul Vitanyi, Chris Wallace, Geoff Webb, Xindong Wu, Jan Zytkow. Inquiries to: isis96@cs.monash.edu.au David Dowe (chair): dld@cs.monash.edu.au Kevin Korb (co-chair): korb@cs.monash.edu.au or Jonathan Oliver (co-chair): jono@cs.monash.edu.au Detailed up-to-date information, including registration costs and further details of speakers, their talks and the tutorials is available on the WWW at: http://www.cs.monash.edu.au/~jono/ISIS/ISIS.shtml - David Dowe, Kevin Korb and Jon Oliver. ======================================================================= ------------------------------ From: Maja Mataric Subject: CALL for PAPERS Date: Sun, 19 May 1996 18:56:40 -0400 CALL FOR PAPERS (http://www.cs.brandeis.edu:80/~maja/abj-special-issue/) ADAPTIVE BEHAVIOR Journal Special Issue on COMPLETE AGENT LEARNING IN COMPLEX ENVIRONMENTS Guest editor: Maja J Mataric Submission Deadline: June 1, 1996. Adaptive Behavior is an international journal published by MIT Press; Editor-in-Chief: Jean-Arcady Meyer, Ecole Normale Superieure, Paris. 
In the last decade, the problems being treated in AI, Alife, and Robotics have witnessed an increase in complexity as the domains under investigation have transitioned from theoretically clean scenarios to more complex dynamic environments. Agents that must adapt in environments such as the physical world, an active ecology or economy, and the World Wide Web, challenge traditional assumptions and approaches to learning. As a consequence, novel methods for automated adaptation, action selection, and new behavior acquisition have become the focus of much research in the field. This special issue of Adaptive Behavior will focus on situated agent learning in challenging environments that feature noise, uncertainty, and complex dynamics. We are soliciting papers describing finished work on autonomous learning and adaptation during the lifetime of a complete agent situated in a dynamic environment. We encourage submissions that address several of the following topics within a whole agent learning system: * learning from ambiguous perceptual inputs * learning with noisy/uncertain action/motor outputs * learning from sparse, irregular, inconsistent, and noisy reinforcement/feedback * learning in real time * combining built-in and learned knowledge * learning in complex environments requiring generalization in state representation * learning from incremental and delayed feedback * learning in smoothly or discontinuously changing environments We invite submissions from all areas in AI, Alife, and Robotics that treat either complete synthetic systems or models of biological adaptive systems situated in complex environments. Submitted papers should be delivered by June 1, 1996. Authors intending to submit a manuscript should contact the guest editor to discuss paper suitability for this issue. Use maja@cs.brandeis.edu or tel: (617) 736-2708 or fax: (617) 736-2741. Manuscripts should be typed or laser-printed in English (with American spelling preferred) and double-spaced. Both paper and electronic submission are possible, as described below. Copies of the complete Adaptive Behavior Instructions to Contributors are available on request--also see the Adaptive Behavior journal's home page at: http://www.ens.fr:80/bioinfo/www/francais/AB.html. For paper submissions, send five (5) copies of submitted papers (hard-copy only) to: Maja Mataric Volen Center for Complex Systems Computer Science Department Brandeis University Waltham, MA 02254-9110, USA For electronic submissions, use Postscript format, ftp the file to ftp.cs.brandeis.edu/incoming, and send an email notification to maja@cs.brandeis.edu. For a Web page of this call, and detailed ftp directions, see: http://www.cs.brandeis.edu/~maja/abj-special-issue/ ------------------------------ From: Una-May O'Reilly Subject: call for papers: Ev. Methods for Program Induction Date: Wed, 29 May 1996 22:15:51 -0400 (EDT) Call for Papers EVOLUTIONARY COMPUTATION Special Issue on Evolutionary Methods for Program Induction Guest editors: Peter Angeline (Lockheed Martin Federal Systems) Una-May O'Reilly (Artificial Intelligence Lab, MIT) A forthcoming issue of Evolutionary Computation will be devoted to the presentation of original research involving evolutionary methods for program induction. The notion of harnessing evolution for the creation of executable programs has a long history, although it has only recently become a well studied topic. In 1950, Turing envisioned the possibility of evolving programs. 
Friedberg (1958) and Friedberg, Dunham and North (1959), working within the constraints of simple computers, evolved sequences of machine language instructions that performed modest computations. Fogel, Owens, and Walsh (1966) introduced evolutionary programming as a method to evolve behaviors using a finite state machine representation. Holland (1975) suggested that a genetic algorithm could be used with an encoding that represented a computer program. Innovative implementations such as those by Smith (1980), Hicklin (1986), and Fujicki (1986) followed. Much of the current work on evolving executable structures follows the work of Koza (1992) under the name of genetic programming. By using a more LISP-like representation, Koza extended the work of Cramer (1985) which demonstrated that parse trees could provide a natural representation for evolving programs. Koza (1992) applied this technique to a broad range of problems. More recently, genetic programming researchers have explored a diverse collection of topics related to program induction including modular representations, theoretical analysis, genetic operator design, parallel and distributed algorithms, and the use of program memory to represent intermediate states of the problems solving process. This notice solicits papers which relate original and innovative research concerning evolution and adaptation as computational paradigms for discovering, manipulating, or optimizing all forms of programs and other executable structures. Relevant topics include but are not restricted to: o Induction of modular or hierarchical programs o Morphogenic approaches to program induction o Program induction with adaptive and self-adaptive evolutionary computations o Encoding program semantics in evolvable structures (e.g. iteration, recursion, special purpose functionality) o Efficient evolutionary operators for program induction o Evolving object oriented, purely functional, declarative or alternative executable structures o Comparison of evolutionary computations with other forms of program induction and optimization o Control and analysis of program growth and development o Improving the readability and comprehension of evolved programs o The evolution of intelligent software agents o Scientific, practical or real-world applications of evolved executable structures Manuscripts should be approximately 8,000 to 12,000 words in length and formatted for 8.5 by 11 inch paper, single-sided and double-spaced. The first page should include the title, abstract, key words and author information. Authors should submit five copies of their manuscript to the following address no later than Sept 6th, 1996. Una-May O'Reilly (unamay@ai.mit.edu) Rm 812, NE-43 MIT Artificial Intelligence Lab, 545 Technology Square, Cambridge, MA, 02139 USA Important milestones for this special issue are as follows: Sept 6, 1996 Submission deadline December 4, 1996 Notification of acceptance December 23, 1996 Revised versions due to editors January 21, 1997 Finished articles to MIT Press This call for papers is also available from http://www.ai.mit.edu/people/unamay/ec-cfp.html References Cramer, Nichael Lynn, A Representation for the Adaptive Generation of Simple Sequential Programs, Grefenstette: Proceedings of First International Conference on Genetic Algorithms, 1985. Fogel, Lawrence J., Owens, Alvin J., and Walsh, Michael. J. 1966. Artificial Intelligence through Simulated Evolution. New York: John Wiley. Friedberg, R. M. l958. A learning machine: Part I. 
IBM Journal of Research and Development, 2(1) 2-13, Friedberg, R. M. Dunham, B., and North, J. H. l959. A learning machine: Part II. IBM Journal of Research and Development, 3(3) 282-287. Fujiki, Cory. l986. An Evaluation of Holland's Genetic Algorithm Applied to a Program Generator. M.S. thesis, Department of Computer Science, Moscow, ID: University of Idaho. Hicklin, Joseph F. l986. Application of the Genetic Algorithm to Automatic Program Generation. M. S. thesis, Department of Computer Science. Moscow, ID: University of Idaho. Holland, John H. Adaptation in Natural and Artificial Systems; The University of Michigan Press, Ann Arbor, 1975. Koza, John R. 1989. Hierarchical genetic algorithms operating on populations of computer programs. In Proc. of the 11th Int'l Joint Conf on Artificial Intelligence. San Mateo, CA: Morgan Kaufmann. Volume I. Pages 768-774. Koza, John R. 1992. Genetic Programming: on the programming of computers by means of natural selection. MIT Press, Cambridge, MA, 1992. S.F. Smith, A Learning System Based on Genetic Adaptive Algorithms, Ph.D. Thesis, Computer Science Dept., University of Pittsburgh, Dec. 1980. Turing, Alan M. 1950. Computing Machinery and Intelligence. Mind, LIX, 2236, pages 433-460. ------------------------------ From: Lev Goldfarb Subject: Workshop: WHAT IS INDUCTIVE LEARNING? (program) Date: Tue, 14 May 1996 14:06:41 -0300 (ADT) WHAT IS INDUCTIVE LEARNING? On the foundations of AI and Cognitive Science May 20-21, 1996 held in conjunction with the 11th biennial Canadian AI conference, at the Holiday Inn on King, in Toronto, Canada. Workshop Chair: Lev Goldfarb Each talk (except opening remarks) is 30 min. followed by 30 min. question/discussion period. Monday, May 20, Morning Session 8:45-9:00 Lev Goldfarb, University of New Brunswick, Canada "Opening Remarks: The inductive learning process as the central cognitive process" 9:00 Chris Thornton, University of Sussex, UK "Does Induction always lead to representation?" 10:10 Lev Goldfarb, University of New Brunswick, Canada "What is inductive learning? Construction of the inductive class representation" 11:20 Anselm Blumer, Tufts University, USA (invited talk) "PAC learning and the Vapnik-Chervonenkis dimension" Monday, May 20, Afternoon Session 2:00 Charles Ling, University of Western Ontario, Canada (invited talk) "Symbolic and neural network learning in cognitive modeling: Where's the beef?" 3:10 Eduardo Perez, Ricardo Vilalta and Larry Rendell, University of Illinois, USA (invited talk) "On the importance of change of representation in induction" 4:20 Sayan Bhattacharyya and John Laird, University of Michigan, USA "A cognitive model of recall motivated by inductive learning" Tuesday, May 21, Morning Session 9:00 Ryszard Michalski, George Mason University, USA (invited talk) "Inductive inference from the viewpoint of inferential theory of learning" 10:10 Lev Goldfarb, Sanjay Deshpande and Virendra Bhavsar, University of New Brunswick, Canada "Inductive theory of vision" 11:20 David Gadishev and David Chiu, University of Guelph, Canada "Learning basic elements for texture representation and comparison" Tuesday, May 21, Afternoon Session 2:00 John Caulfield, Center of Applied Optics, A&M University, USA (invited talk) "Induction and Physics" 3:10 Igor Jurisica, University of Toronto, Canada "Inductive learning and case-based reasoning" 4:20 Concluding discussion: What is inductive learning? 
************************************************************************ URL for Canadian AI'96 Conference http://ai.iit.nrc.ca/cscsi/conferences/ai96.html ------------------------------ From: Knut Moeller Subject: HeKoNN96-CfP Date: Wed, 15 May 1996 14:13:54 +0200 (MET DST) CALL FOR PARTICIPATION = = = H e K o N N 9 6 = = = Autumn School in C o n n e c t i o n i s m and N e u r a l N e t w o r k s October 2-6, 1996 Muenster, Germany Conference Language: German A comprehensive description of the Autumn School together with abstracts of the courses can be found at the following address: WWW: http://set.gmd.de/AS/fg1.1.2/hekonn ------------------------------ End of ML-LIST (Digest format) **************************************** From pierre@mbfys.kun.nl Fri May 31 10:14:32 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id KAA11287 for ; Fri, 31 May 1996 10:13:33 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id KAA09612 for ; Fri, 31 May 1996 10:13:31 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa12621; 30 May 96 14:12:48 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa12619; 30 May 96 13:58:11 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa14273; 30 May 96 13:57:47 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id ai27041; 30 May 96 5:15:00 EDT Received: from septimius.mbfys.kun.nl by CS.CMU.EDU id aa08041; 30 May 96 4:47:17 EDT Received: from anthemius by septimius.mbfys.kun.nl via anthemius.mbfys.kun.nl [131.174.173.158] with SMTP id KAA28507 (8.6.10/2.4); Thu, 30 May 1996 10:45:22 +0200 Sender: pierre@mbfys.kun.nl Message-ID: <31AD5FDC.794BDF32@mbfys.kun.nl> Date: Thu, 30 May 1996 10:44:12 +0200 From: "Piërre van de Laar" Organization: KUN X-Mailer: Mozilla 2.01 (X11; I; SunOS 4.1.3_U1 sun4m) MIME-Version: 1.0 To: connectionists@cs.cmu.edu CC: pierre@mbfys.kun.nl Subject: sensitivity analysis and relevance Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Dear Connectionists, I am interested in methods which perform sensitivity analysis and/or relevance determination of input fields, and especially methods which use neural networks. Although I have already found a number of references (see end of mail), I expect that this list is not complete. Any further references would be highly appreciated. As usual a summary of all replies will be posted in about a month. Thanks in advance, Piërre van de Laar Department of Medical Physics and Biophysics University of Nijmegen, The Netherlands mailto:pierre@mbfys.kun.nl http://www.mbfys.kun.nl/~pierre/ Aldrich, C. and van Deventer, J.S.J., Modelling of Induced Aeration in Turbine Aerators by Use of Radial Basis Function Neural Networks, The Canadian Journal of Chemical Engineering 73(6):808-816, 1995. Boritz, J. Efrim and Kennedy, Duane B., Effectiveness of Neural Network Types for Prediction of Business Failure, Expert Systems With Applications, 9(4):503-512, 1995. Hammitt, A.M. and Bartlett, E.B., Determining Functional Relationships from trained neural networks, Mathematical and Computer Modelling 22(3):83-103, 1995. Korthals, R.L. and Hahn, G.L. and Nienaber, J.A., Evaluation of Neural Networks as a tool for management of swine environments, Transactions of the American Society of Agricultural Engineers 37(4):1295-1299, 1994. Lacroix, R. and Wade, K.M. and Kok, R.
and Hayes, J.F., Prediction of cow performance with a connectionist model, Transactions of the American Society of Agricultural Engineers 38(5):1573-1579, 1995. MacKay, David J.C., Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks, Network: Computation in Neural Systems 6(3):469-505, 1995. Neal, Radford M., Bayesian Learning for neural networks, Dept. of Computer Science, University of Toronto, 1994. Oh, Sang-Hoon and Lee, Youngjik, Sensitivity Analysis of Single Hidden-Layer Neural Networks with Threshold Functions, IEEE Transactions on Neural Networks 6(4):1005-1007, 1995. Naimimohasses, R. and Barnett, D.M. and Green, D.A. and Smith, P.R., Sensor optimization using neural network sensitivity measures, Measurement Science & Technology 6(9):1291-1300, 1995. BrainMaker Professional: User's Guide and Reference Manual, 4th edition, California Scientific Software, Nevada City, Chapter 10, 48-59, 1993. From fisher@tweed.cse.ogi.edu Fri May 31 10:14:35 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id KAA11290 for ; Fri, 31 May 1996 10:13:36 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id KAA09614 for ; Fri, 31 May 1996 10:13:33 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa13166; 31 May 96 0:01:11 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa13149; 30 May 96 23:42:47 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa14673; 30 May 96 23:42:17 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa09444; 30 May 96 17:52:09 EDT Received: from cse.ogi.edu by CS.CMU.EDU id aa13655; 30 May 96 17:51:42 EDT Received: from [129.95.40.120] by church.cse.ogi.edu with smtp (Smail3.1.29.1 #4) id m0uPFc4-000KpUC; Thu, 30 May 96 14:50 PDT Message-Id: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Date: Thu, 30 May 1996 14:50:46 +0100 To: connectionists@cs.cmu.edu, ml@ics.uci.edu, Reinforce@cs.uwa.edu.au, gann-list@cs.iastate.edu, corryfee%hasara11.BITNET@cunyvm.cuny.edu, owner-csemlist@mundo.eco.utexas.edu, list.nih.gov@DST.BOLTZ.CS.CMU.EDU, comp-finance@teleport.com MMDF-Warning: Parse error in original version of preceding line at DST.BOLTZ.CS.CMU.EDU From: Therese Fisher Subject: New Computational Finance Program Computational Finance at Oregon Graduate Institute of Science & Technology (OGI) A Concentration in the MS Programs of Computer Science & Engineering (CSE) Electrical Engineering & Applied Physics (EEAP) ---------------------------------------------------------------------------- 20000 NW Walker Road, PO Box 91000, Portland, OR 97291-1000 ---------------------------------------------------------------------------- Computational Finance at OGI is a 12-month intensive program leading to a Master of Science degree in Computer Science and Engineering (CSE track) or in Electrical Engineering & Applied Physics (EE track). The program features: * A 12 month intensive program to train scientists and engineers for doing state-of-the-art quantitative or information systems work in finance. * Provide an attractive alternative to the standard 2 year MBA for technically-sophisticated students. * Provide a solid foundation in finance. Cover three semesters of MBA level finance in three quarters, and go beyond that.
* Provide a solid foundation in relevant techniques from CS and EE for modeling financial markets and developing investment analysis, trading, and risk management systems. * Give CS/EE graduates the necessary finance background to work as information system specialists in major financial firms. * Emphasize state-of-the-art techniques in neural networks, adaptive systems, signal processing, and data modeling. * Provide state-of-the-art computing facilities for doing course assignments using live and historical market data provided by Dow Jones Telerate. * Provide students an opportunity to do significant projects using extensive market data resources and state-of-the-art analysis packages, thereby making them more attractive to employers. * Through their course work and projects, students will develop significant expertise in using and programming important analysis packages, such as Mathematica, Matlab, SPlus, and Expo. ---------------------------------------------------------------------------- Major Components of Program: The curriculum includes 4 quarters with courses structured within the standard CSE/EEAP framework, with 5 courses in the finance specialty area, 7 or 8 core courses within the CSE or EEAP departments, and 3 electives. Students will enroll in either the CSE (CSE track) or EEAP (EE track) MS programs. ---------------------------------------------------------------------------- Admission Requirements & Contact Information ---------------------------------------------------------------------------- Admission requirements are the same as the general requirements of the institution. GRE scores are required for the 12-month concentration in Computational Finance, however they may be waived in special circumstances. A candidate must hold a bachelor's degree in computer science, engineering, mathematics, statistics, one of the biological or physical sciences, finance, or one of the quantitative social sciences. 
For more information, contact Computational Finance Betty Shannon, Academic Coordinator Computer Science and Engineering Department Oregon Graduate Institute of Science and Technology P.O. Box 91000 Portland, OR 97291-1000 E-mail: academic@cse.ogi.edu Phone: (503) 690-1255 or E-mail: CompFin@cse.ogi.edu WWW: http://www.cse.ogi.edu/CompFin/ From leo@stat.Berkeley.EDU Fri May 31 10:14:40 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id KAA11296 for ; Fri, 31 May 1996 10:13:45 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id KAA09620 for ; Fri, 31 May 1996 10:13:42 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa13225; 31 May 96 0:49:59 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa13223; 31 May 96 0:41:02 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa14784; 31 May 96 0:40:21 EDT Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa02977; 30 May 96 11:34:44 EDT Received: from saruman.Berkeley.EDU by EDRC.CMU.EDU id aa26734; 30 May 96 11:34:24 EDT Received: from [128.32.135.134] (scf-sl6.Berkeley.EDU [128.32.135.134]) by saruman.Berkeley.EDU (8.6.10/8.6.10) with SMTP id IAA03578 for ; Thu, 30 May 1996 08:33:46 -0700 Date: Thu, 30 May 1996 08:33:46 -0700 Message-Id: Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="============_-1378680604==_============" X-mailer: Eudora Pro 2.1.3 To: connectionists@cs.cmu.edu From: Leo Breiman Subject: paper available: Bias, Variance and Arcing Classifiers This paper is available at the ftp machine ftp.stat.berkeley.edu under users/breiman/arcall.ps.Z BIAS, VARIANCE, AND ARCING CLASSIFIERS Leo Breiman* Statistics Department University of California Berkeley, CA 94720 leo@stat.berkeley.edu ABSTRACT Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test set error. To study this, the concepts of bias and variance of a classifier are defined. Unstable classifiers can have universally low bias. Their problem is high variance. Combining multiple versions is a variance reducing device. One of the most effective is bagging (Breiman [1996a]). Here, modified training sets are formed by resampling from the original training set, classifiers are constructed using these training sets, and the classifiers are then combined by voting. Freund and Schapire [1995,1996] propose an algorithm the basis of which is to adaptively resample and combine (hence the acronym--arcing) so that the weights in the resampling are increased for those cases most often misclassified and the combining is done by weighted voting. Arcing is more successful than bagging in variance reduction. We explore two arcing algorithms, compare them to each other and to bagging, and try to understand how arcing works.
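The abstract describes the two combining schemes only in words; the short sketch below (not code from the paper) illustrates one way they could be set up, assuming scikit-learn decision trees as the unstable base classifier, integer class labels 0..k-1, and an AdaBoost-style weight update standing in for the Freund-Schapire scheme mentioned above.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging(X, y, X_test, rounds=25, seed=0):
    # Bagging: bootstrap the training set uniformly, train one tree per
    # resample, combine the trees by unweighted (plurality) voting.
    rng = np.random.default_rng(seed)
    n, k = len(y), int(y.max()) + 1          # assumes numpy arrays, labels 0..k-1
    votes = np.zeros((len(X_test), k))
    for _ in range(rounds):
        idx = rng.integers(0, n, size=n)     # uniform resample with replacement
        tree = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        votes[np.arange(len(X_test)), tree.predict(X_test)] += 1.0
    return votes.argmax(axis=1)

def arcing(X, y, X_test, rounds=25, seed=0):
    # Arcing: adaptively increase the resampling weight of training cases the
    # trees keep missing, and combine by weighted voting (the AdaBoost-style
    # update below is one concrete choice, used here only for illustration).
    rng = np.random.default_rng(seed)
    n, k = len(y), int(y.max()) + 1
    w = np.full(n, 1.0 / n)
    votes = np.zeros((len(X_test), k))
    for _ in range(rounds):
        idx = rng.choice(n, size=n, p=w)     # resample according to case weights
        tree = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        miss = tree.predict(X) != y
        err = float(np.clip(w @ miss, 1e-10, 0.4999))
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(alpha * np.where(miss, 1.0, -1.0))   # upweight missed cases
        w /= w.sum()
        votes[np.arange(len(X_test)), tree.predict(X_test)] += alpha
    return votes.argmax(axis=1)

In a toy setting of this kind, the arcing run concentrates its resampling weight on the hard cases, which is the behaviour the abstract credits for its stronger variance reduction.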
From ATAXR@asuvm.inre.asu.edu Fri May 31 10:14:43 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id KAA11292 for ; Fri, 31 May 1996 10:13:40 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id KAA09616 for ; Fri, 31 May 1996 10:13:35 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ab13166; 31 May 96 0:02:27 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id ab13149; 30 May 96 23:42:51 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa14677; 30 May 96 23:42:36 EDT Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa10273; 30 May 96 18:56:09 EDT Received: from post2.INRE.ASU.EDU by CS.CMU.EDU id aa14017; 30 May 96 18:53:53 EDT Received: from ASUVM.INRE.ASU.EDU (MAILER@ASUACAD) by asu.edu (PMDF V4.3-10 #7723) id <01I5BM49U98W8Y8W5E@asu.edu>; Thu, 30 May 1996 15:52:37 -0700 (MST) Received: from ASUACAD (NJE origin ATAXR@ASUACAD) by ASUVM.INRE.ASU.EDU (LMail V1.2a/1.8a) with BSMTP id 0587; Thu, 30 May 1996 15:52:05 -0700 Date: Thu, 30 May 1996 15:44:10 -0700 (MST) From: Asim Roy Subject: Connectionist Learning - Some New Ideas/Questions To: connectionists@cs.cmu.edu Message-id: <01I5BM4AALLU8Y8W5E@asu.edu> Content-transfer-encoding: 7BIT (This is for posting to your mailing list.) This is an attempt to respond to some thoughts on one particular aspect of our learning theory - the one that requires connectionist/neural net algorithms to make an explicit "attempt" to build the smallest possible net (generalize, that is). One school of thought says that we should not attempt to build the smallest possible net because some extra neurons in the net (and their extra connections) provide the benefits of fault tolerance and reliability. And since the brain has access to billions of neurons, it does not really need to worry about a real resource constraint - it is practically an unlimited resource. (It is a fact of life, however, that at some age we do have difficulty memorizing and remembering things and learning- we perhaps run out of space (neurons) like a storage device on a computer. Even though billions of neurons is a large number, we must be using most of it at some age. So it is indeed a finite resource and some of it appears to be reused, like we reuse space on our storage devices. For memorization, for example, it is possible that the brain selectively erases some old memories to store some new ones. So a finite capacity system is a sensible view of the brain.) Another argument in favor of not trying to generalize is that by not worrying about attempting to create the smallest possible net, the connectionist algorithms are easier to develop and less complex. I hope researchers will come forward with other arguments in favor of not attempting to create the smallest possible net or to generalize. There is one main problem with the argument that adding lots of extra neurons to a net buys reliability and fault tolerance. First, we run the severe risk of "learning nothing" if we don't attempt to generalize. With lots of neurons available to a net, we would simply overfit the net to the problem data. (Try it next time on your back prop net. Add 10 or 100 times the number of hidden nodes you need and observe the results.) That is all we would achieve. 
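(A purely illustrative version of the experiment suggested above - the data, the sizes, and the use of scikit-learn's MLPClassifier are assumptions for the sketch, not part of the original posting - is to compare a modest and a grossly oversized backprop net on a small noisy problem and watch the train/test gap:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y ^= (rng.random(200) < 0.15).astype(int)      # label noise invites overfitting
X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

for hidden in (4, 400):                        # "right-sized" vs. ~100x oversized
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=5000,
                        random_state=0).fit(X_tr, y_tr)
    print(hidden, "hidden units:",
          "train acc %.2f" % net.score(X_tr, y_tr),
          "test acc %.2f" % net.score(X_te, y_te))

On runs like this the oversized net typically drives its training accuracy toward 1.0 while its test accuracy stays flat or drops; exact numbers will vary, but that gap is the "learns nothing useful" failure being warned about.)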
Without good generalization, we may have a fault tolerant and reliable net, but it may be "useless" for all practical purposes because it may have "learnt nothing". Generalization is the fundamental part of learning - it perhaps should be the first learning criteria for our algorithms. We can't overlook or skip that part. If an algorithm doesn't attempt to generalize, it doesn't attempt to learn. It is as simple as that. So generalization needs to be our first priority and fault tolerance comes later. First we must "learn" something, then make it fault tolerant and reliable. Here is a practical viewpoint for our algorithms. Even though neurons are almost "unlimited" and free of cost to our brain, from a practical engineering stand point, "silicon" neurons are not so cheap. So our algorithms definitely need to be cost conscious and try to build the smallest possible net; they cannot be wasteful in their use of expensive "silicon" neurons. Once we obtain good generalization on a problem, fault tolerance can be achieved in many other ways. It would not hurt to examine the well established theory of reliability for some neat ideas. A few backup systems might be a more cost effective way to buy reliability than throwing in lots of extra silicon in a single system which may buy us nothing (it "learns nothing"). From controlling nuclear power plants with backup computer systems to adding extra tires in our trucks and buses, the backup idea works quite well. It is possible that "backup" is also what is used in our brains. We need to find out. "Redundancy" may be in the form of backup systems. "Repair" is another good idea used in our everyday lives for not so critical systems. Is fault tolerance and reliability sometimes achieved in the brain through the process of "repair"? Patients do recover memory and other brain functions after a stroke. Is that repair work by the biological system? It is a fact that biological systems are good at repairing things (look at simple things like cuts and bruises). We perhaps need to look closer at our biological systems and facts and get real good clues to how it works. Let us not jump to conclusions so quickly. Let us argue and debate with our facts. We will do our science a good service and be able to make real progress. I would welcome more thoughts and debate on this issue. I have included all of the previous responses on this particular issue for easy reference by the readers. I have also appended our earlier note on our learning theory. Perhaps more researchers will come forward with facts and ideas and enlighten all of us on this crucial question. ******************************************** On May 16 Kevin Cherkauer wrote: "In a recent thought-provoking posting to the connectionist list, Asim Roy said: >E. Generalization in Learning: The method must be able to >generalize reasonably well so that only a small amount of network >resources is used. That is, it must try to design the smallest possible >net, although it might not be able to do so every time. This must be >an explicit part of the algorithm. This property is based on the >notion that the brain could not be wasteful of its limited resources, >so it must be trying to design the smallest possible net for every >task. I disagree with this point. According to Hertz, Krogh, and Palmer (1991, p. 2), the human brain contains about 10^11 neurons. (They also state on p. 
3 that "the axon of a typical neuron makes a few thousand synapses with other neurons," so we're looking at on the order of 10^14 "connections" in the brain.) Note that a period of 100 years contains only about 3x10^9 seconds. Thus, if you lived 100 years and learned continuously at a constant rate every second of your life, your brain would be at liberty to "use up" the capacity of about 30 neurons (and 30,000 connections) per second. I would guess this is a very conservative bound, because most of us probably spend quite a bit of time where we aren't learning at such a furious rate. But even using this conservative bound, I calculate that I'm allowed to use up about 2.7x10^6 neurons (and 2.7x10^9 connections) today. I'll try not to spend them all in one place. :-) Dr. Roy's suggestion that the brain must try "to design the smallest possible net for every task" because "the brain could not be wasteful of its limited resources" is unlikely, in my opinion. It seems to me that the brain has rather an abundance of neurons. On the other hand, finding optimal solutions to many interesting "real-world" problems is often very hard computationally. I am not a complexity theorist, but I will hazard to suggest that a constraint on neural systems to be optimal or near-optimal in their space usage is probably both impossible to realize and, in fact, unnecessary. Wild speculation: the brain may have so many neurons precisely so that it can afford to be suboptimal in its storage usage in order to avoid computational time intractability. References Hertz, J.; Krogh, A.; & Palmer, R.G. 1991. Introduction to the Theory of Neural Computation. Redwood City, CA:Addison-Wesley." ************************************************** On May 15 Richard Kenyon wrote on the subject of generalization: " The brain probably accepts some form of redundancy (waste). I agree that the brain is one hell of an optimisation machine. Intelligence whatever task it may be applied to is (again imho) one long optimisation process. Generalisation arises (even emerges or is a side effect) as a result of ongoing optimisation, conglomeration, reprocessing etc etc. This is again very important i agree, but i think (i do anyway) we in NN commumnity are aware of this as with much of the above. I thought that apart from point A we were doing all of this already, although to have it explicitly published is very valuable." ***************************************** On May 16 Lokendra Shastri replied to Kevin Cherkauer: "There is another way to look at the numbers. The retina provides 10^6 inputs to the brain every 200 msec! A simple n^2 algorithm to process this input would require more neurons than we have in our brain. We can understand (or at least process) a potentially unbounded number of sentences --- Here is one "the grandcanyon walked past the banana" I could have said anyone of a gazzilion sentences at this point and you would have probably understood it. Even if we just count the overt symbolic knowledge, we carry in our heads, we can enumerate about a million items. A coding scheme that consumed a 1000 neurons per item (which is not much) would soon run out neurons. Remember that a large fraction of our neurons are already taken up by sensorimotor processes (vision itself consumes a fair fraction of the brain).For an argument on the tight constraints posed by the "limited" number of neurons vis-a-vis common sense knowledge, you may want to see: ``From simple associations to systematic reasoning'', L. Shastri and V. Ajjanagadde. 
In Behavioral and Brain Sciences Vol. 16, No. 3, 417--494, 1993. My home page has a URL to a postscript version. There was also a nice paper by Tsotsos in Behavioral and Brains Sciences on this topic from the perspective of Visual Processing. Also you might want to see Feldman and Ballard 1982 paper in Cognitive Science." *********************************************** On May 17 Steven Small replied to Keven Cherkauer: "I agree with this general idea, although I'm not sure that "computational time intractability" is necessarily the principal reason. There are a lot of good reasons for redundancy, overlap, and space "suboptimality", not the least of which is the marvellous ability at recovery that the brain manifests after both small injuries and larger ones that give pause even to experienced neurologists." ************************************************* On May 17 Jonathan Stein replied to Steven Small and Kevin Cherkauer: "One needn't draw upon injuries to prove the point. One loses about 100,000 cortical neurons a day (about a percent of the original number every three years) under normal conditions. This loss is apparently not significant for brain function. This has been often called the strongest argument for distributed processing in the brain. Compare this ability with the fact that single conductor disconnection cause total system failure with high probability in conventional computers. Although certainly acknowledged by the pioneers of artificial neural network techniques, very few networks designed and trained by present techniques are anywhere near that robust. Studies carried out on the Hopfield model of associative memory DO show graceful degradation of memory capacity with synapse dilution under certain conditions (see eg. DJ Amit's book "Attractor Neural Networks"). Synapse pruning has been applied to trained feedforward networks (eg. LeCun's "Optimal Brain Damage") but requires retraining of the network." ****************************************** On May 18 Raj Rao replied to Kevin Cherkauer and Steven Small: " Does anyone have a concrete citation (a journal article) for this or any other similar estimate regarding the daily cell death rate in the cortex of a normal brain? I've read such numbers in a number of connectionist papers but none cite any neurophysiological studies that substantiate these numbers." ******************************************** On May 19 Richard Long wrote: "There may be another reason for the brain to construct networks that are 'minimal' having to do with Chaitin and Kolmogorov computational complexity. If a minimal network corresponds to a 'minimal algorithm' for implementing a particular computation, then that particular network must utilize all of the symmetries and regularities contained in the problem, or else these symmetries could be used to reduce the network further. Chaitin has shown that no algorithm for finding this minimal algorithm in the general case is possible. However, if an evolutionary programming method is used in which the fitness function is both 'solves the problem' and 'smallest size' (i.e. Occam's razor), then it is possible that the symmetries and regularities in the problem would be extracted as smaller and smaller networks are found. I would argue that such networks would compute the solution less by rote or brute force, and more from a deep understanding of the problem. I would like to hear anyone else's thoughts on this." 
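(One purely illustrative way to write down the two-part fitness Long describes - the size measure and the weighting below are arbitrary assumptions for the sketch, not his proposal in detail - is:

import numpy as np

def occam_fitness(weights, task_error, size_weight=1e-3):
    # Higher is better: an accuracy term minus a description-length-style size
    # penalty, so a search guided by this score prefers the smallest network
    # (fewest parameters) that still solves the task.
    n_params = sum(w.size for w in weights)
    return (1.0 - task_error) - size_weight * n_params

# Example: a 10-4-1 net with 5% error beats a 10-400-1 net with 4% error.
small = [np.zeros((10, 4)), np.zeros((4, 1))]
big = [np.zeros((10, 400)), np.zeros((400, 1))]
print(occam_fitness(small, 0.05) > occam_fitness(big, 0.04))   # True

Under such a score an evolutionary search would trade a little accuracy for a much smaller net, which is the Occam's razor pressure being discussed.)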
************************************************** On May 20 Juergen Schmidhuber replies to Richard Long: "Apparently, Kolmogorov was the first to show the impossibility of finding the minimal algorithm in the general case (but Solomonoff also mentions it in his early work). The reason is the halting problem, of course - you don't know the runtime of the minimal algorithm. For all practical applications, runtime has to be taken into account. Interestingly, there is an ``optimal'' way of doing this, namely Levin's universal search algorithm, which tests solution candidates in order of their Levin complexities: L. A. Levin. Universal sequential search problems, Problems of Information Transmission 9:3,265-266,1973. For finding Occam's razor neural networks with minimal Levin complexity, see J. Schmidhuber: Discovering solutions with low Kolmogorov complexity and high generalization capability. In A.Prieditis and S.Russell, editors, Machine Learning: Proceedings of the 12th International Conference, 488--496. Morgan Kaufmann Publishers, San Francisco, CA, 1995. For Occam's razor solutions of non-Markovian reinforcement learning tasks, see M. Wiering and J. Schmidhuber: Solving POMDPs using Levin search and EIRA.In Machine Learning: Proceedings of the 13th International Conference. Morgan Kaufmann Publishers, San Francisco, CA, 1996, to appear." ********************************************** On May 20 Sydney Lamb replied to Jonathan Stein and others: " There seems to be some differing information coming from different sources. The way I heard it, the typical person has lost only about 3% of the original total of cortical neurons after about 70 or 80 years. As for the argument about distributed processing, two comments: (1) there are different kinds of distributive processing; one of them also uses strict localization of points of convergence for distributed subnetworks of information (cf. A. Damasio 1989 --- several papers that year). (2) If the brain is like other biological systems, the neurons being lost are probably most the ones not being used --- ones that have been remaining latent and available to assume some function, but never called upon. Hence what you get with old age is not so much loss of information as loss of ability to learn new things --- varying in amount, of course, from one individual to the next." ***************************************** On May 20 Mark Johnson replies to Raj Rao: "From my reading of the recent literature massive postnatal cell loss in the human cortex is a myth. There is postnatal cortical cell death in rodents, but in primates (including humans) there is only (i) a decreased density of cell packing, and (ii) massive (up to 50%) synapse loss. (The decreased density of cell packing was apparently misinterpreted as cell loss in the past). Of course, there are pathological cases, such as Alzheimers, in which there is cell loss. I have written a review of human postnatal brain development which I can send out on request." ************************************************** *************************************************** APPENDIX We have recently published a set of principles for learning in neural networks/connectionist models that is different from classical connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE Transactions on Neural Networks, to appear; see references below). 
Below is a brief summary of the new learning theory and why we think classical connectionist learning, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of training examples for learning), is not brain-like at all. Since vigorous and open debate is very healthy for a scientific field, we invite comments for and against our ideas from all sides. "A New Theory for Learning in Connectionist Models" We believe that a good rigorous theory for artificial neural networks/connectionist models should include learning methods that perform the following tasks or adhere to the following criteria: A. Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem, since, in general, it is a task performed by the brain. A pre-designed net should not be provided to the method as part of its external input, since it never is an external input to the brain. From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - a system that is expected to learn "on its own" without any external design assistance. B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties. Some people might argue that ordinary brains, and particularly those with learning disabilities, do exhibit such problems and that these learning requirements are the attributes only of a "super" brain. The goal of neuroengineers and neuroscientists is to design and build learning systems that are robust, reliable and powerful. They have no interest in creating weak and problematic learning devices that need constant attention and intervention. C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do. For example, one which learns from only 10 examples learns faster than one which requires a 100 or a 1000 examples. We have shown that on-line learning (see references below), when not allowed to store training examples in memory, can be extremely slow in learning - that is, would require many more examples to learn a given task compared to methods that use memory to remember training examples. It is not desirable that a neural network/connectionist learning system be similar in characteristics to learners characterized by such sayings as "Told him a million times and he still doesn't understand." On-line learning systems must learn rapidly from only a few examples. D. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and Papert[1988]). It must be able to both design and train an appropriate net in polynomial time. That is, given P examples, the learning time (i.e. both design and training time) should be a polynomial function of P. This, again, is a critical computational property from a neuroengineering and neuroscience point of view. This property has its origins in the belief that biological systems (insects, birds for example) could not be solving NP-hard problems, especially when efficient, polynomial time learning methods can conceivably be designed and developed. E. Generalization in Learning: The method must be able to generalize reasonably well so that only a small amount of network resources is used. 
That is, it must try to design the smallest possible net, although it might not be able to do so every time. This must be an explicit part of the algorithm. This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task. General Comments This theory defines algorithmic characteristics that are obviously much more brain-like than those of classical connectionist theory, which is characterized by pre-defined nets, local learning laws and memoryless learning (no storing of actual training examples for learning). Judging by the above characteristics, classical connectionist learning is not very powerful or robust. First of all, it does not even address the issue of network design, a task that should be central to any neural network/connectionist learning theory. It is also plagued by efficiency (lack of polynomial time complexity, need for excessive number of teaching examples) and robustness problems (local minima, oscillation, catastrophic forgetting, lost memories), problems that are partly acquired from its attempt to learn without using memory. Classical connectionist learning, therefore, is not very brain-like at all. As far as I know, there is no biological evidence for any of the premises of classical connectionist learning. Without having to reach into biology, simple common sense arguments can show that the ideas of local learning, memoryless learning and predefined nets are impractical even for the brain! For example, the idea of local learning requires a predefined network. Classical connectionist learning forgot to ask a very fundamental question - who designs the net for the brain? The answer is very simple: Who else, but the brain itself! So, who should construct the net for a neural net algorithm? The answer again is very simple: Who else, but the algorithm itself! (By the way, this is not a criticism of constructive algorithms that do design nets.) Under classical connectionist learning, a net has to be constructed (by someone, somehow - but not by the algorithm!) prior to having seen a single training example! I cannot imagine any system, biological or otherwise, being able to construct a net with zero information about the problem to be solved and with no knowledge of the complexity of the problem. (Again, this is not a criticism of constructive algorithms.) A good test for a so-called "brain-like" algorithm is to imagine it actually being part of a human brain. Then examine the learning phenomenon of the algorithm and compare it with that of the human's. For example, pose the following question: If an algorithm like back propagation is "planted" in the brain, how will it behave? Will it be similar to human behavior in every way? Look at the following simple "model/algorithm" phenomenon when the back- propagation algorithm is "fitted" to a human brain. You give it a few learning examples for a simple problem and after a while this "back prop fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again." And you ask: "Which examples should I go over again?" And this "back prop fitted" brain replies: "You need to go over all of them. I don't remember anything you told me." So you go over the teaching examples again. And let's say it gets stuck in a local minimum again and, as usual, does not remember any of the past examples. 
So you provide the teaching examples again and this process is repeated a few times until it learns properly. The obvious questions are as follows: Is "not remembering" any of the learning examples a brain- like phenomenon? Are the interactions with this so-called "brain- like" algorithm similar to what one would actually encounter with a human in a similar situation? If the interactions are not similar, then the algorithm is not brain-like. A so-called brain-like algorithm's interactions with the external world/teacher cannot be different from that of the human. In the context of this example, it should be noted that storing/remembering relevant facts and examples is very much a natural part of the human learning process. Without the ability to store and recall facts/information and discuss, compare and argue about them, our ability to learn would be in serious jeopardy. Information storage facilitates mental comparison of facts and information and is an integral part of rapid and efficient learning. It is not biologically justified when "brain-like" algorithms disallow usage of memory to store relevant information. Another typical phenomenon of classical connectionist learning is the "external tweaking" of algorithms. How many times do we "externally tweak" the brain (e.g. adjust the net, try a different parameter setting) for it to learn? Interactions with a brain-like algorithm has to be brain-like indeed in all respect. The learning scheme postulated above does not specify how learning is to take place - that is, whether memory is to be used or not to store training examples for learning, or whether learning is to be through local learning at each node in the net or through some global mechanism. It merely defines broad computational characteristics and tasks (i.e. fundamental learning principles) that are brain-like and that all neural network/connectionist algorithms should follow. But there is complete freedom otherwise in designing the algorithms themselves. We have shown that robust, reliable learning algorithms can indeed be developed that satisfy these learning principles (see references below). Many constructive algorithms satisfy many of the learning principles defined above. They can, perhaps, be modified to satisfy all of the learning principles. The learning theory above defines computational and learning characteristics that have always been desired by the neural network/connectionist field. It is difficult to argue that these characteristics are not "desirable," especially for self-learning, self- contained systems. For neuroscientists and neuroengineers, it should open the door to development of brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. It essentially tries to redefine the nature of algorithms considered to be brain- like. And it defines the foundations for developing truly self- learning systems - ones that wouldn't require constant intervention and tweaking by external agents (human experts) for it to learn. It is perhaps time to reexamine the foundations of the neural network/connectionist field. This mailing list/newsletter provides an excellent opportunity for participation by all concerned throughout the world. I am looking forward to a lively debate on these matters. That is how a scientific field makes real progress. Asim Roy Arizona State University Tempe, Arizona 85287-3606, USA Email: ataxr@asuvm.inre.asu.edu References 1. Roy, A., Govil, S. 
& Miranda, R. 1995. A Neural Network Learning Theory and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural Networks, to appear. 2. Roy, A., Govil, S. & Miranda, R. 1995. An Algorithm to Generate Radial Basis Function (RBF)-like Nets for Classification Problems. Neural Networks, Vol. 8, No. 2, pp. 179-202. 3. Roy, A., Kim, L.S. & Mukhopadhyay, S. 1993. A Polynomial Time Algorithm for the Construction and Training of a Class of Multilayer Perceptrons. Neural Networks, Vol. 6, No. 4, pp. 535- 545. 4. Mukhopadhyay, S., Roy, A., Kim, L.S. & Govil, S. 1993. A Polynomial Time Algorithm for Generating Neural Networks for Pattern Classification - its Stability Properties and Some Test Results. Neural Computation, Vol. 5, No. 2, pp. 225-238. From jbower@bbb.caltech.edu Fri May 31 10:14:47 1996 Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id KAA11294 for ; Fri, 31 May 1996 10:13:42 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.6.12/8.6.12) with SMTP id KAA09618 for ; Fri, 31 May 1996 10:13:37 -0500 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ac13166; 31 May 96 0:03:08 EDT Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa13151; 30 May 96 23:44:29 EDT Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa14682; 30 May 96 23:44:02 EDT Message-Id: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Date: Thu, 30 May 1996 10:13:32 -0800 Subject: CNS*96 (Computational Neuroscience Meeting) To: Connectionists@cs.cmu.edu From: jbower@bbb.caltech.edu Call for Registration CNS*96 Cambridge, Massachuetts July 14-17 1996 CNS*96: Registration is now open for this year's Computational Neuroscience meeting (CNS*96). This is the fifth in a series of annual inter-disciplinary conferences intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience, The meeting will take place at the Cambridge Center Marriott Hotel and includes plenary, contributed, and poster sessions. In addition, two half days will be devoted to informal workshops on a wide range of subjects. The first session starts at 9 am, Sunday July 14th and the last session ends at 5 pm on Wednesday, July 17th. Day care will be available for children. Overall Agenda This year's meeting is anticipated to be the best meeting yet in this series. Submitted papers increased by more than 80% this year, with representation from many if not most of major institutions involved in computational neuroscience. All papers submitted to the meeting were peer reviewed, resulting in 230 papers to be presented in either oral or poster form . These papers represent contributions by both experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in understanding how biological neural systems compute. The agenda is well represented by experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. Full information on the agenda\ is available on the meeting's web page (http://www.bbb.caltech.edu/cns96/cns96.html). Invited Speakers Invited speakers for this year's meeting include: Eve Marder (Brandeis University), Miguel Nicoleis (Duke University Medical Center), Joseph J. Atick (Rockefeller University), Ron Calabrese (Emory University), John S. 
Poster Presentations

More than 200 poster presentations on a wide variety of topics related to computational neuroscience will be presented at this year's meeting.

Oral Presentations

Jeffrey B. Colombe (University of Chicago), Philip S. Ulinski
Functional Organization of Cortical Microcircuits: II. Anatomical Organization of Feedforward and Feedback Pathways

D. C. Somers (MIT), Emanuel V. Todorov, Athanassios G. Siapas, and Mriganka Sur
A Local Circuit Integration Approach to Understanding Visual Cortical Receptive Fields

Emilio Salinas (Brandeis University), L.F. Abbott
Multiplicative Cortical Responses and Input Selection Based on Recurrent Synaptic Connections

Leslie C. Osborne (UC Berkeley), John P. Miller
The Filtering of Sensory Information by a Mechanoreceptive Array

James A. Mazer (MIT)
The Integration of Parallel Processing Streams in the Sound Localization System of the Barn Owl

Wulfram Gerstner (Institut für Theoretische Physik), Richard Kempter, J. Leo van Hemmen, and Hermann Wagner
A Developmental Learning Rule for Coincidence Tuning in the Barn Owl Auditory System

Allan Gottschalk (University of Pennsylvania Hospital)
Information Based Limits on Synaptic Growth in Hebbian Models

S. P. Strong (NEC Research Institute), Ronald Koberle, Rob R. de Ruyter van Steveninck, and William Bialek
Entropy and Information in Neural Spike Trains

Hans Liljenström (Royal Institute of Technology), Peter Arhem
Investigating Amplifying and Controlling Mechanisms for Random Events in Neural Systems

David Terman (Ohio State University), Amit Bose, and Nancy Kopell
Functional Reorganization in Thalamocortical Networks: Transition Between Spindling and Delta Sleep Rhythms

Angel Alonso (McGill University), Xiao-Jing Wang, Michael M. Guevara, and Brian Craft
A Comparative Model Study of Neuronal Oscillations in Septal GABAergic Cells and Entorhinal Cortex Stellate Cells: Contributors to the Theta and Gamma Rhythms

Gene Wallenstein (Harvard University), Michael E. Hasselmo
Bursting and Oscillations in a Biophysical Model of Hippocampal Region CA3: Implications for Associative Memory and Epileptiform Activity

Mayank R. Mehta (University of Arizona), Bruce L. McNaughton
Rapid Changes in Hippocampal Population Code During Behavior: A Case for Hebbian Learning in Vivo

Karl Kilborn (University of California, Irvine), Don Kubota, and Richard Granger
Parameters of LTP Induction Modulate Network Categorization Behavior

Peter Dayan (MIT), Satinder Pal Singh
Long Term Potentiation, Navigation, & Dynamic Programming

Chantal E. Stern (Harvard Medical School), Michael E. Hasselmo
Functional Magnetic Resonance Imaging and Computational Modeling: An Integrated Study of Hippocampal Function

Rajesh P. N. Rao (University of Rochester), Dana H. Ballard
Cortico-Cortical Dynamics and Learning During Visual Recognition: A Computational Model

R.Y. Reis (AT&T Bell Laboratories), Daniel D. Lee, H.S. Seung, B.I. Shraiman, and D.W. Tank
Nonlinear Network Models of the Oculomotor Integrator

Yair Weiss (MIT), Edward H. Adelson
Adaptive Robust Windows: A Model for the Selective Integration of Motion Signals in Human Vision

Emanuel V. Todorov (MIT), Athanassios G. Siapas, David C. Somers, and Sacha B. Nelson
Modeling Visual Cortical Contrast Adaptation Effects

Dieter Jaeger (Caltech), James M. Bower
Dual in Vitro Whole Cell Recordings from Cerebellar Purkinje Cells: Artificial Synaptic Input Using Dynamic Current Clamping

Xiao-Jing Wang (Brandeis University)
Calcium Control of Time-Dependent Input-Output Computation in Cortical Pyramidal Neurons

Alexander Protopapas (Caltech), James M. Bower
Piriform Pyramidal Cell Response to Physiologically Plausible Spatio-Temporal Patterns of Synaptic Input

Ole Jensen (Brandeis University), Marco A. P. Idiart, and John E. Lisman
A Model for Physiologically Realistic Synaptic Encoding and Retrieval of Sequence Information

S. B. Nelson (Brandeis University), J.A. Varela, K. Sen, and L.F. Abbott
Synaptic Decoding of Visual Cortical EPSCs Reveals a Potential Mechanism for Contrast Adaptation

Nicolas G. Hatsopoulos (Brown University), Jerome N. Sanes, and John P. Donoghue
Dynamic Correlations in Unit Firing of Motor Cortical Neurons Related to Movement Preparation and Action

Adam N. Elga (Princeton University), A. David Redish, and David S. Touretzky
A Model of the Rodent Head Direction System

Dianne Pawluk (Harvard University), Robert Howe
A Holistic Model of Human Touch

************************************************************************
REGISTRATION INFORMATION FOR THE FIFTH ANNUAL COMPUTATIONAL NEUROSCIENCE MEETING
CNS*96
JULY 14 - JULY 17, 1996
BOSTON, MASSACHUSETTS
************************************************************************

LOCATION: The meeting will take place at the Boston Marriott in Cambridge, Massachusetts.

MEETING ACCOMMODATIONS: Accommodations for the meeting have been arranged at the Boston Marriott. We have reserved a block of rooms at a special rate for all attendees of $126 per night, single or double occupancy, in the conference hotel (that is, two people sharing a room would split the $126!). A fixed number of rooms have been reserved for students at the rate of $99 per night, single or double occupancy (yes, that means about $50 a night per student!). These student room rates are on a first-come-first-served basis, so we recommend acting quickly to reserve these slots. Also, for some student registrants housing will be available at Harvard University. Thirty single rooms are available on a first-come-first-served basis. Please look at your orange colored sheets for more information.

Registering for the meeting WILL NOT result in an automatic room reservation. Instead, you must make your own reservation by returning the enclosed registration sheet to the hotel, by fax, or by contacting:

Boston Marriott Cambridge
ATTENTION: Reservations Dept.
Two Cambridge Center
Cambridge, Massachusetts 02142
(617) 494-6600
Toll Free: (800) 228-9290
Fax No. (617) 494-0036

NOTE: IN ORDER TO GET THE REDUCED RATES, YOU MUST CONFIRM HOTEL REGISTRATIONS BY JUNE 12, 1996. When making reservations by phone, make sure to indicate that you are registering for the CNS*96 meeting. Students will be asked to verify their status at check-in with a student ID or other documentation.

MEETING REGISTRATION FEES:

Registration received on or before June 12, 1996:
Student: $95 (One Banquet Ticket Included)
Regular: $225 (One Banquet Ticket Included)

Meeting registration after June 12, 1996:
Student: $125 (One Banquet Ticket Included)
Regular: $250 (One Banquet Ticket Included)

BANQUET: Registration for the meeting includes a single ticket to the annual CNS Banquet, to be held this year at the Museum of Science on Tuesday evening, July 16th. Additional Banquet tickets can be purchased for $35 per person.
----------------------------------------------------------------------------
CNS*96 REGISTRATION FORM

Last Name:
First Name:
Title:
Student___ Graduate Student___ Post Doc___ Professor___ Committee Member___ Other___
Organization:
Address:
City: State: Zip: Country:
Telephone:
Email Address:

REGISTRATION FEES:

Technical Program -- July 14 - July 17, 1996
Regular $225 ($250 after June 12th) - One Banquet Ticket Included
Student $95 ($125 after June 12th) - One Banquet Ticket Included
Banquet $35 (Additional Banquet Tickets at $35.00 per Ticket) - July 16, 1996

Total Payment: $

Please Indicate Method of Payment:

Check or Money Order
* Payable in U.S. Dollars to CNS*96 - Caltech
* Please make sure to indicate CNS*96 and YOUR name on all money transfers.

Charge my card: Visa Mastercard American Express
Number:
Expiration Date:
Name of Cardholder:
Signature as appears on card (for mailed-in applications):
Date:

ADDITIONAL QUESTIONS:

Previously Attended: CNS*92___ CNS*93___ CNS*94___ CNS*95___
Did you submit an abstract and summary? ( ) Yes ( ) No
Title:
Do you have special dietary preferences or restrictions (e.g., diabetic, low sodium, kosher, vegetarian)? If so, please note:

Some grants to cover partial travel expenses may become available. Do you wish further information? ( ) Yes ( ) No
(Please Note: Travel funds will be available for students and postdoctoral fellows presenting papers at the meeting.)

*******PLEASE FAX OR MAIL REGISTRATION FORM TO:

Caltech, Division of Biology 216-76, Pasadena, CA 91125
Attn: Judy Macias
Fax Number: (818) 795-2088

ADDITIONAL INFORMATION can be obtained by:

Using our on-line WWW information and registration server at the URL:
http://www.bbb.caltech.edu/cns96/cns96.html

ftp-ing to our ftp site:
yourhost% ftp ftp.bbb.caltech.edu
Name: ftp
Password: yourname@yourhost.yoursite.yourdomain
ftp> cd pub/cns96
ftp> ls
ftp> get filename

Sending Email to: cns96@smaug.bbb.caltech.edu

***************************************
James M. Bower
Division of Biology
Mail code: 216-76
Caltech
Pasadena, CA 91125
(818) 395-6817
(818) 795-2088 FAX

NCSA Mosaic addresses for:
laboratory: http://www.bbb.caltech.edu/bowerlab
GENESIS: http://www.bbb.caltech.edu/GENESIS
science education reform: http://www.caltech.edu/~capsi