From emj@cs.ucsd.edu Mon Nov 27 18:26:36 1995
Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Mon, 27 Nov 95 18:26:33 -0600; AA13546
Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Mon, 27 Nov 95 18:26:31 -0600
Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa25692; 27 Nov 95 16:44:17 EST
Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25690; 27 Nov 95 16:14:08 EST
Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25285; 27 Nov 95 16:13 EST
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa07885; 26 Nov 95 2:04:13 EST
Received: from odin.ucsd.edu by EDRC.CMU.EDU id aa22182; 26 Nov 95 2:03:28 EST
Received: from triangulum.ucsd.edu by odin.ucsd.edu; id AA19763 sendmail 5.67/UCSDPSEUDO.4-CS via SMTP Sat, 25 Nov 95 23:03:25 -0800 for connectionists@cs.cmu.edu
Received: by triangulum (5.67/UCSDPSEUDO.4) id AA04033 for connectionists@cs.cmu.edu; Sat, 25 Nov 95 23:03:22 -0800
Date: Sat, 25 Nov 95 23:03:22 -0800
From: Eric Mjolsness
Message-Id: <9511260703.AA04033@triangulum>
To: connectionists@cs.cmu.edu
Subject: NIPS 95 workshop schedule

NIPS-95 Workshop, ** Tentative Schedule **

Statistical and Structural Models in Network Vision

Friday, December 1, 1995

Organizers: Eric Mjolsness and Anand Rangarajan
URL: http://www-cse.ucsd.edu/users/emj/workshop95.html

Overview:

Neural network and model-based approaches to vision are usually regarded as opposing tendencies. Whereas neural net methods often focus on images and learned feature detectors, model-based methods concentrate on matching high-level representations of objects, their parts, and other intrinsic properties. It is possible that the two approaches can be integrated in the context of statistical models which have the flexibility to represent patterns both in image space and in higher-level object feature spaces. The workshop will examine the possibilities for and progress in formulating such models for vision problems, particularly those models which can result in neural network architectures.

*Tentative Schedule*

7:30am - 7:50am  Eric Mjolsness, "Workshop Overview"
7:50am - 8:15am  Chris Bregler, "Soft Features for Soft Classifiers"
8:15am - 8:40am  Hayit Greenspan, "Preprocessing and Learning in Rotation-Invariant Texture Recognition"
8:40am - 9:05am  Alan Yuille, "Deformable Templates for Object Recognition: Geometry and Lighting"
9:05am - 9:30am  Anand Rangarajan, "Bayesian Tomographic Reconstruction using Mechanical Models as Priors"
9:30am - 4:30pm  Exercises
4:30pm - 4:50pm  Paul Viola, "Recognition by Complex Features"
4:50pm - 5:15pm  Lawrence Staib, "Model-based Parametrically Deformable Boundary Finding"
5:15pm - 5:40pm  Steven Gold, "Recognizing Objects with Recurrent Neural Networks by Matching Structural Models"
5:40pm - 6:05pm  Robert M. Haralick, "Annotated Computer Vision Data Sets"
6:05pm - 6:30pm  Eric Saund, "Representations for Perceptual Level Chunks in Line Drawings"

*Abstracts*

Chris Bregler, U.C. Berkeley, "Soft Features for Soft Classifiers"

Most connectionist approaches that are applied to visual domains either make little use of any preprocessing or are based on very high-level input representations. The former solutions are motivated by the concern not to lose any useful information for the final classification and show how powerful such algorithms are in extracting relevant features automatically. "Hard" decisions like edge detectors, line-finders, etc. don't fit into the philosophy of adaptability across all levels.
We attempt to find a balance between both extremes and show how mature "soft" preprocessing techniques, such as rich sets of scaled and rotated Gaussian derivatives, second-moment texture statistics, and Hierarchical Mixtures of Experts, can be applied to the domain of car classification.

Steven Gold, Yale University, "Recognizing Objects with Recurrent Neural Networks by Matching Structural Models"

Attributed Relational Graphs (ARGs) are used to create structural models of objects. Recently developed optimization techniques that have emerged out of the neural network/statistical physics framework are then used to construct algorithms to match ARGs. Experiments conducted on ARGs generated from images are presented.

Hayit Greenspan, Caltech, "Preprocessing and Learning in Rotation-Invariant Texture Recognition"

A number of texture recognition systems have recently been proposed in the literature, giving very high-accuracy classification rates. Almost all of these systems fail miserably when invariances, such as rotation or scale, are to be included; invariant recognition is clearly the next major challenge in recognition systems. Rotation invariance can be achieved in one of two ways: either by extracting rotation-invariant features, or by appropriate training of the classifier to make it "learn" invariant properties. Learning invariances from the raw data is substantially influenced by the rotation angles that have been included in the system's training set: the more examples, the better the performance. We compare this with a mechanism that extracts the rotation-invariant features prior to the learning phase. We introduce a powerful image representation space based on a steerable filter set, along with a new encoding scheme to extract the invariant features. Our results strongly indicate the advantage of extracting a powerful image representation prior to the learning process, with savings in both storage and computational complexity. Rotation-invariant texture recognition results and a demo will be shown.

Robert M. Haralick, University of Washington, "Annotated Computer Vision Data Sets"

Recognizing features with a protocol that learns from examples requires that there be many example instances. In this talk, we describe the 78-image annotated data set of RADIUS images, of 2 different 3D scenes, which we have prepared for CD-ROM distribution. The feature annotation includes building edges, shadow edges, clutter edges, and corner positions. In addition, the image data set has photogrammetric data of corresponding 3D and 2D points and corresponding 2D pass points. The interior orientation and exterior orientation parameters for all images are given. The availability of this data set makes possible the comparison of different algorithms and very careful experiments on feature extraction using neural net approaches.

Eric Mjolsness, "Workshop Overview"

I will introduce the workshop and discuss possibilities for integration between some of the research directions represented by the participants.

Anand Rangarajan, Yale University, "Bayesian Tomographic Reconstruction using Mechanical Models as Priors"

We introduce a new prior---the weak plate---for tomographic reconstruction. MAP estimates are obtained via a deterministic annealing Generalized EM algorithm which avoids poor local minima. Bias/variance simulation results on an autoradiograph phantom demonstrate the superiority of the weak plate prior over other first-order priors used in the literature.
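The "weak plate" in Rangarajan's abstract is a curvature (second-difference) penalty that is allowed to break at discontinuities rather than smooth across them. The sketch below is only a generic 1D illustration of such a capped-curvature prior with a crude annealing schedule, written for this digest; it is not the authors' tomographic GEM algorithm, and every parameter value in it is made up.

import numpy as np

def weak_plate_denoise(y, lam=2.0, alpha=0.05, steps=3000, lr=0.01):
    """Toy 1D reconstruction with a weak-plate (capped curvature) prior,
    optimized by gradient descent with a crude deterministic-annealing schedule."""
    n = len(y)
    x = y.copy()
    for t in range(steps):
        beta = 1.0 + 50.0 * t / steps            # inverse temperature for the soft min
        d2 = x[:-2] - 2.0 * x[1:-1] + x[2:]       # discrete second differences
        # soft weight -> 1 where the plate stays intact (d2^2 < alpha) and -> 0
        # where it is cheaper to "break" it; as beta -> inf this recovers min(d2^2, alpha)
        w = np.exp(-beta * d2**2) / (np.exp(-beta * d2**2) + np.exp(-beta * alpha))
        g_data = 2.0 * (x - y)                    # gradient of the quadratic data term
        s = 2.0 * lam * w * d2                    # gradient of the prior wrt each second difference
        g_prior = np.zeros(n)
        g_prior[:-2] += s
        g_prior[1:-1] -= 2.0 * s
        g_prior[2:] += s
        x -= lr * (g_data + g_prior)
    return x

# piecewise-linear test signal plus noise
t = np.linspace(0.0, 1.0, 200)
clean = np.where(t < 0.5, t, 1.0 - t)
noisy = clean + 0.05 * np.random.randn(len(t))
rec = weak_plate_denoise(noisy)
print("rms error noisy:", np.sqrt(np.mean((noisy - clean)**2)),
      "reconstructed:", np.sqrt(np.mean((rec - clean)**2)))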
Eric Saund, Xerox Palo Alto Research Center, "Representations for Perceptual Level Chunks in Line Drawings" In a line drawing, what makes a box a box? A perfectly drawn box is easy to recognize because it presents a remarkable conjunction of crisp spatial properties yielding a wealth of necessary and sufficient conditions to test. But if it is drawn sloppily, many ideal properties such as closure, squareness, and straightness of the sides, give way. In addressing this problem, most attendants of NIPS probably would look to the supple adaptability and warm fuzziness of statistical approaches over the cold fragility of logic-based specifications. Even in doing this, however, some representations generalize better than others. This talk will address alternative representations for visual objects presented at an elemental level as curvilinear lines, with a look to densely overlapping distributed representations in which a large number of properties can negotiate their relative significance. Lawrence Staib, Yale University, "Model-based Parametrically Deformable Boundary Finding" This work describes a global shape parametrization for smoothly deformable curves and surfaces. This representation is used as a model for geometric matching to image data. The parametrization represents the curve or surface using sinusoidal basis functions and allows a wide variety of smooth boundaries to be described with a small number of parameters. Extrinsic model-based information is incorporated by introducing prior probabilities on the parameters based on a sample of objects. Boundary finding is then formulated as an optimization problem. The method has been applied to synthetic images and three-dimensional medical images of the heart and brain. Paul Viola, Salk Institute, "Recognition by Complex Features" From gem@cogsci.indiana.edu Mon Nov 27 20:17:10 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Mon, 27 Nov 95 20:17:00 -0600; AA14545 Message-Id: <9511280216.AA22874@lucy.cs.wisc.edu> Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Mon, 27 Nov 95 20:16:58 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa25700; 27 Nov 95 16:54:20 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25694; 27 Nov 95 16:14:57 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25289; 27 Nov 95 16:14 EST Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa13392; 26 Nov 95 18:44:29 EST Received: from horsie.cogsci.indiana.edu by EDRC.CMU.EDU id aa23762; 26 Nov 95 18:44:07 EST Received: by horsie.cogsci.indiana.edu (4.1/9.2jsm) id AA03535; Sun, 26 Nov 95 18:45:01 EST Date: Sun, 26 Nov 95 18:45:01 EST From: Gary McGraw To: Connectionists@CS.cmu.edu Subject: Letter recognition thesis Announcing the availability of my thesis on the web... Letter Spirit (part one): Emergent High-level Perception of Letters Using Fluid Concepts Gary McGraw Center for Research on Concepts and Cognition 510 North Fess Street Indiana University Bloomington, IN 47405 This thesis presents initial work on the Letter Spirit project, with a cognitive model of letter perception as its centerpiece. The Letter Spirit project is an attempt to model central aspects of human high-level perception and creativity on a computer, focusing on the creative act of artistic letter-design. The aim is to model the process of rendering the 26 lowercase letters of the roman alphabet in many different, internally coherent styles. 
Two important and orthogonal aspects of letterforms are basic to the project: the categorical sameness possessed by instances of a single letter in various styles (e.g., the letter `a' in Times, Palatino, and Helvetica) and the stylistic sameness possessed by instances of various letters in a single style, or spirit (e.g., the letters `a', `k', and `q' in Times, alone). Starting with one or more seed letters representing the beginnings of a style, the program will attempt to create the rest of the alphabet in such a way that all 26 letters share the same style, or spirit.

Letters in the domain are formed exclusively from straight segments on a grid, in order to make decisions fewer in number and more discrete. This restriction allows much of low-level vision to be bypassed and forces concentration on higher-level cognitive processing, particularly the abstract and context-dependent character of concepts. The overall architecture of the Letter Spirit project, based on the principles of emergent computation and flexible context-sensitive concepts, has been carefully developed and is presented in Part I.

Creating a gridfont is an iterative process of guesswork and evaluation --- the ``central feedback loop of creativity''. The notion of evaluation and its necessary foundation in perception have been largely overlooked in many cognitive models of creativity. In order to be truly creative, a program must do its own evaluating and perceiving. To this end, we have focused initial Letter Spirit work on a high-level perceptual task --- letter perception. We have developed an emergent model of letter perception based on the hypothesis that letter categories are made up of conceptual constituents, called roles, which exert clear top-down influence on the segmentation of letterforms into structural components. Part II introduces the role hypothesis, presents experimental psychological evidence supporting it, and then introduces a complex cognitive model of letter perception that is consistent with the empirical results. Because we are interested ultimately in the design of letters (and the creative process as a whole), an effort was made to develop a model rich enough to be able to recognize and conceptualize a large range of letters, including letters at the fringes of their categories.

--------------------------------------------------------------------------

The ~400-page thesis is available on the web at URL:

It may also be retrieved through ftp to ftp.cogsci.indiana.edu as the file pub/mcgraw.thesis.ps.gz

Hardcopy is generally not available due to prohibitive costs. However, if you are having trouble retrieving the text, send me e-mail (gem@cogsci.indiana.edu). Comments and questions are welcome.
Gary McGraw

*---------------------------------------------------------------------------*
| Gary McGraw            gem@cogsci.indiana.edu   |   (__)                   |
|--------------------------------------------------   (oo)                   |
| Center for Research on Concepts and Cognition   | /-------\/               |
| Department of Computer Science                  | / |     ||               |
| Indiana University         (812) 855-6966       |*  ||----||               |
|                                                 |   ^^    ^^               |
*---------------------------------------------------------------------------*

From sayegh@CVAX.IPFW.INDIANA.EDU Tue Nov 28 15:01:50 1995
Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Tue, 28 Nov 95 15:01:12 -0600; AA25977
Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Tue, 28 Nov 95 15:01:07 -0600
Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa25718; 27 Nov 95 17:07:58 EST
Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25716; 27 Nov 95 16:59:15 EST
Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25306; 27 Nov 95 16:59 EST
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa14066; 26 Nov 95 19:54:25 EST
Received: from cvax.ipfw.indiana.edu by CS.CMU.EDU id aa19953; 26 Nov 95 19:54:02 EST
Received: by CVAX.IPFW.INDIANA.EDU (MX V4.1 VAX) id 51; Sun, 26 Nov 1995 19:53:58 EST
Date: Sun, 26 Nov 1995 19:53:57 EST
From: sayegh@CVAX.IPFW.INDIANA.EDU
To: connectionists@cs.cmu.edu
Message-Id: <00999FC1.6320AA7C.51@CVAX.IPFW.INDIANA.EDU>
Subject: NN distance learning course announcement

FOUNDATIONS AND APPLICATIONS OF NEURAL NETWORKS

Course announcement

This course is to be offered in the Spring of 1996. Students at remote sites will receive and view lecture tapes at their convenience. Handouts, homework and other assignments will be handled via a web site.

This is a 500-level course open to both seniors and graduate students in the Sciences, Mathematics, Engineering, Computer Science, and Psychology, or professionals interested in the topic, provided they meet the prerequisites or obtain the instructor's permission. The course is listed as PHYS 525 at Purdue. Please contact the instructor if you are interested.

Instructor: Dr. Samir Sayegh  sayegh@cvax.ipfw.indiana.edu
Phone: (219) 481-6157
FAX: (219) 481-6800

Description: In the last ten years Neural Networks have become both a powerful practical tool for approaching difficult classification, optimization and signal processing problems and a serious paradigm for computation in parallel machines and biological networks. This course is an introduction to the main concepts and algorithms of neural networks. Lengthy derivations and formal proofs are avoided or minimized, and an attempt is made to emphasize the connection between the "artificial" network approaches and their neurobiological counterparts. In order to help achieve that latter goal, the text "A Vision of the Brain" by Semir Zeki is required reading, in addition to the main text "Introduction to the Theory of Neural Computation" by Hertz, Krogh and Palmer, and the instructor's (valuable) notes. Zeki's book recounts the fascinating (hi)story of the discovery of the color center in the human visual cortex and emphasizes very general organizational principles of neuroanatomy and neurophysiology, which are highly relevant to any serious computational approach.
The following classic topics are covered:

- Introduction to the brain and its simpler representations
- Neural Computational Elements and Algorithms
- Perceptron
- Adaptive Linear Element
- Backpropagation
- Hopfield Model, Associative Memory and Optimization
- Kohonen networks
- Unsupervised Hebbian learning and principal component analysis
- Applications in signals, speech, robotics and forecasting
- Introduction to Computational Neuroscience
- Introduction to functional neuroanatomy and functional imaging
- Introduction to the visual pathways and computation in retina and visual cortex

Prerequisites: Calculus, matrix algebra and familiarity with a computer language

Texts:
"A Vision of the Brain" by Semir Zeki (Blackwell, 1993)
"Introduction to the Theory of Neural Computation" by Hertz, Krogh and Palmer (Addison Wesley, 1991)
Instructor's (valuable) notes.

Testing: Each lecture comes with a handout that includes a list of objectives and a set of multiple-choice questions. The two take-home midterm exams and the final exam will be mostly multiple choice, with the questions reflecting the lecture objectives. In addition, each student will be expected to complete an individual project in the area of her/his interest. The project may or may not be part of the final grade, depending on the project's progress.

Software: While students are welcome to use the language of their choice, the high-level language MATLAB and the associated toolbox for Neural Networks will be provided for the duration of the course at no additional charge.

Cost (US $)        Indiana Resident    Non-resident
Undergraduate           249.45            644.45
Graduate                315.60            751.05

Appendix (brief intro):

Neural Networks provide a fruitful approach to a variety of engineering and scientific problems that have traditionally been considered difficult. While an exact definition remains elusive and different practitioners would emphasize one or another of the characteristics of NN, it is possible to list the most common and some of the most fundamental features of neural network solutions:

1) Adaptive
2) Parallel
3) Neurally inspired
4) Ability to handle non-linear problems in a transparent way

Let us look at these in some detail:

1) Adaptive solutions are desirable in a number of situations. They present advantages of stability as well as the ability to deal with huge data sets with minimal memory requirements, as the patterns are presented "one at a time." The same advantage implies the possibility of developing real-time, on-line solutions where the totality of the data set is not available at the outset.

2) The formulation of neural network solutions is inherently parallel. A large number of nodes share the burden of a computation and often can operate independently of information made available by other nodes. This clearly speeds up computation and allows implementation on highly efficient parallel hardware.

3) Though the extent is somewhat debated, it is clear that there are some similarities between current artificial neural algorithms and biological systems capable of intelligence. The fact that such biological systems still display pattern recognition capabilities far beyond those of our algorithms is a continuing incentive to maintain and further explore the neurobiological connection.

4) The ability to handle nonlinearity is a fundamental requirement of modern scientific and engineering approaches.
In a number of fields, the nonlinear approaches are developed on a case by case basis and have often little connection to the better established linear techniques. On the other hand, with the general approach of formulating a neural network and endowing it with increasingly complex processing capabilities, it is possible to define a unified spectrum extending from linear networks (say a one weight-layer ADALINE) to highly nonlinear ones with powerful processing capabilities (say a multilayer backpropagation network). The combination of the properties outlined coupled to the near universal model of neural networks and the availability of software and hardware tools make NN one of the most attractive instruments of signal processing and pattern recognition available today. From rao@cs.rochester.edu Tue Nov 28 15:01:51 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Tue, 28 Nov 95 15:01:13 -0600; AA25981 Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Tue, 28 Nov 95 15:01:11 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa25766; 27 Nov 95 17:16:49 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25754; 27 Nov 95 17:01:37 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25313; 27 Nov 95 17:00 EST Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa14951; 26 Nov 95 22:01:22 EST Received: from cayuga.cs.rochester.edu by RI.CMU.EDU id aa02215; 26 Nov 95 22:01:11 EST Received: from vulture.cs.rochester.edu (vulture.cs.rochester.edu [192.5.53.47]) by cayuga.cs.rochester.edu (8.6.9/H) with ESMTP id WAA27800; Sun, 26 Nov 1995 22:01:09 -0500 Received: (from rao@localhost) by vulture.cs.rochester.edu (8.6.9/K) id WAA12985; Sun, 26 Nov 1995 22:01:07 -0500 Date: Sun, 26 Nov 1995 22:01:07 -0500 From: rao@cs.rochester.edu Message-Id: <199511270301.WAA12985@vulture.cs.rochester.edu> To: connectionists@cs.cmu.edu, neuron@CATTELL20.psych.upenn.edu Subject: Paper Available: Dynamic Model of Visual Cortex Dynamic Model of Visual Memory Predicts Neural Response Properties In The Visual Cortex Rajesh P.N. Rao and Dana H. Ballard Department of Computer Science University of Rochester Rochester, NY 14627-0226, USA Technical Report 95.4 National Resource Laboratory for the study of Brain and Behavior (November 1995) Abstract Recent neurophysiological experiments have shown that the responses of visual cortical neurons in a monkey freely viewing a natural scene can differ substantially from those obtained when the same image subregions are flashed while the monkey performs a fixation task. Neurophysiological research in the past has been based predominantly on cell recordings obtained during fixation tasks, under the assumption that this data would be useful in predicting the responses in more general situations. It is thus important to understand the differences revealed by the new findings and their relevance to the study of visual perception. We describe a computational model of visual memory which dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to achieve optimal estimation of current state by using a Kalman filter-based framework. Computer simulations of the proposed model are shown to correlate closely with the reported neurophysiological observations in both free-viewing and fixating conditions. 
The model posits a role for the hierarchical structure of the visual cortex and its reciprocal connections between adjoining visual areas in determining the response properties of visual cortical neurons. ======================================================================== Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/dynmem.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/dynmem.ps.Z 10 pages; 309K compressed, 646K uncompressed e-mail: rao@cs.rochester.edu Hardcopies available upon request at the above address or from the first author at NIPS*95. ========================================================================= From eric@research.nj.nec.com Tue Nov 28 15:01:52 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Tue, 28 Nov 95 15:01:40 -0600; AA25987 Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Tue, 28 Nov 95 15:01:13 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ab25766; 27 Nov 95 17:17:50 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25757; 27 Nov 95 17:04:40 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25317; 27 Nov 95 17:04 EST Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa21756; 27 Nov 95 10:54:59 EST Received: from zingo.nj.nec.com by RI.CMU.EDU id aa04240; 27 Nov 95 10:53:50 EST Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16) id KAA06554(zingo); Mon, 27 Nov 1995 10:51:03 -0500 Received: by yin (940816.SGI.8.6.9/cliff's joyful mailer #2) id KAA13177(yin); Mon, 27 Nov 1995 10:51:01 -0500 Date: Mon, 27 Nov 1995 10:51:01 -0500 From: "Eric B. Baum" Message-Id: <9511271051.ZM13175@yin> In-Reply-To: Barak Pearlmutter "Re: Response to no-free-lunch discussion" (Nov 19, 3:03pm) References: <199511192303.PAA07609@valaga.salk.edu> X-Mailer: Z-Mail (3.2.0 26oct94 MediaMail) To: eric@research.nj.nec.com, Barak.Pearlmutter@scr.siemens.com, Connectionists@cs.cmu.edu Subject: Re: Response to no-free-lunch discussion Cc: dhw@santafe.edu Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Barak Pearlmutter remarked that saying We have *no* a priori reason to believe that targets with "low Kolmogorov complexity" (or anything else) are/not likely to occur in the real world. (which I gather was a quote from David Wolpert?) is akin to saying we have no a priori reason to believe there is non-random structure in the world, which is not true, since we make great predictions about the world. Wolpert replied: > To illustrate just one of the possible objections to measuring > randomness with Kolmogorov complexity: Would you say that a > macroscopic gas with a specified temperature is "random"? To describe > it exactly takes a huge Kolmogorov complexity. And certainly in many > regards its position in phase space is "nothing but noise". (Indeed, > in a formal sense, its position is a random sample of the Boltzmann > distribution.) Yet Physicists can (and do) make extraordinarilly > accurate predictions about such creatures with ease. Somebody else (Jurgen Schmidhuber I think?) argued that a gas does *not* have high Kolmogorov complexity, because its time evolution is predictable. So in a lattice gas model, given initial conditions (which are relatively compact, including compact pseudorandom number generator) one may be able to predict evolution of gas. Two comments: (1) While it may be that in classical Lattice gas models, a gas does not have high Kolmogorov complexity, this is not the origin of the predictability exploited by physicists. 
Statistical mechanics follows simply from the assumption that the gas is in a random one of the accessible states, i.e. the states with a given amount of energy. So *define* a *theoretical* gas as follows: every time you observe it, it is in a random accessible state. Then its Kolmogorov complexity is huge (there are many accessible states) but its macroscopic behavior is predictable. (Actually this is an excellent description of a real gas, given quantum mechanics.)

(2) Point 1 is no solace to those arguing for the relevance of Wolpert's theorem, as I understand it. We observe above that non-randomness arises purely out of statistical ensemble effects. This is non-randomness nonetheless. Consider the problem of learning to predict the pressure of a gas from its temperature. Wolpert's theorem, and his faith in our lack of prior about the world, predict that any learning algorithm whatever is as likely to be good as any other. This is not correct.

Interestingly, Wolpert and Macready's results appear irrelevant/wrong here in an entirely *random*, *play* world. We see that learnable structure arises at a macroscopic level, and that our natural instincts about learning (e.g. linear relationships, cross-validation as opposed to anti-cross-validation) hold. We don't need to appeal to experience with physical nature in this play world. We could prove theorems about the origin of structure. (This may even be a fruitful thing to do.) Creatures evolving in this "play world" would exploit this structure and understand their world in terms of it. There are other things they would find hard to predict. In fact, it may be mathematically valid to say that one could construct equally many functions on which these creatures would fail to make good predictions. But so what? So would their competition. This is not relevant to looking for one's key, which is best done under the lamppost, where one has a hope of finding it. In fact, it doesn't seem that the play-world creatures would care about all these other functions at all.

What was the Einstein quote wondering about the surprising utility of mathematics in understanding the natural world? Maybe mathematics itself provides an answer?
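A toy numerical version of the pressure-from-temperature example makes the point concrete. The sketch below assumes an ideal monatomic gas in arbitrary units; it is only an illustration added for this digest, not part of the original argument. Microscopically random samples still yield an almost perfectly learnable macroscopic law:

import numpy as np

rng = np.random.default_rng(0)
N = 10_000                       # particles per "observation"
k = 1.0                          # Boltzmann constant in toy units
V = 1.0                          # fixed volume

# Microscopically random samples: for each temperature, draw N particle
# velocities and compute the pressure from the mean kinetic energy.
temps = np.linspace(1.0, 5.0, 40)
pressures = []
for T in temps:
    v = rng.normal(0.0, np.sqrt(k * T), size=(N, 3))   # Maxwell-Boltzmann velocities, m = 1
    mean_ke = 0.5 * (v**2).sum(axis=1).mean()
    pressures.append(2.0 / 3.0 * N * mean_ke / V)       # P V = (2/3) * total kinetic energy
pressures = np.array(pressures)

# Despite the microscopic randomness, a one-parameter linear fit (P = c * T)
# learned on half the data predicts the held-out half to within about a percent.
train, test = slice(0, None, 2), slice(1, None, 2)
c = np.sum(temps[train] * pressures[train]) / np.sum(temps[train]**2)
rel_err = np.abs(c * temps[test] - pressures[test]) / pressures[test]
print(f"fitted c = {c:.2f} (ideal N k / V = {N * k / V:.2f}), "
      f"max held-out relative error = {rel_err.max():.2e}")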
--
-------------------------------------
Eric Baum
NEC Research Institute, 4 Independence Way, Princeton NJ 08540
PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric@research.nj.nec.com
http://www.neci.nj.nec.com:80/homepages/eric/eric.html

From jkim@FIZ.HUJI.AC.IL Tue Nov 28 15:01:52 1995
Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Tue, 28 Nov 95 15:01:42 -0600; AA25993
Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Tue, 28 Nov 95 15:01:16 -0600
Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ad25766; 27 Nov 95 17:19:58 EST
Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25762; 27 Nov 95 17:06:21 EST
Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25325; 27 Nov 95 17:05 EST
Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa29484; 27 Nov 95 16:05:09 EST
Received: from keter.fiz.huji.ac.il by RI.CMU.EDU id aa05719; 27 Nov 95 16:04:40 EST
Received: by keter.fiz.huji.ac.il id AA15158 (5.65c/HUJI 4.143 for connectionists@cs.cmu.edu); Mon, 27 Nov 1995 21:38:09 +0200
Date: Mon, 27 Nov 1995 21:38:09 +0200
From: Jai Won Kim
Message-Id: <199511271938.AA15158@keter.fiz.huji.ac.il>
To: connectionists@cs.cmu.edu
Subject: Preprint announcement: Online-Gibbs Learning

Dear Jordan Pollack,

We would like to post an announcement of a new preprint on your network. We attach below the title and authors as well as the abstract of the paper.

Subject: announcement of a new preprint: On-line Gibbs Learning
FTP-host: keter.fiz.huji.ac.il
FTP-file: pub/ON-LINE-LEARNING/online_gibbs.ps.Z
The length of the paper: 4 pages.

Thanking you in advance for your help.

Regards,
Haim Sompolinsky and Jaiwon Kim
e-mails: haim@fiz.huji.ac.il, jkim@fiz.huji.ac.il

_______________________________________________________________________________

On-line Gibbs Learning

J. W. Kim and H. Sompolinsky
Racah Institute of Physics and Center for Neural Computation,
Hebrew University, Jerusalem 91904, Israel
e-mails: jkim@fiz.huji.ac.il ; haim@fiz.huji.ac.il

(Submitted to Physical Review Letters, Nov 95)

ABSTRACT

We propose a new model of on-line learning which is appropriate for learning realizable and unrealizable, smooth as well as threshold, functions. Following each presentation of an example, the new weights are chosen from a Gibbs distribution with an on-line energy that balances the need to minimize the instantaneous error against the need to minimize the change in the weights. We show that this algorithm finds the weights that minimize the generalization error in the limit of an infinite number of examples. The asymptotic rate of convergence is similar to that of batch learning.
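To make the update rule in the abstract concrete, here is a minimal sketch for the special case of a linear model with squared error, where the Gibbs distribution over the new weights is Gaussian and can be sampled in closed form. This is an illustration written for this digest, not the paper's code; the trade-off parameter lam and the inverse temperature beta below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)

def online_gibbs_step(w_old, x, y, lam=1.0, beta=50.0):
    """One on-line Gibbs update for a linear model with squared error.

    On-line energy:  E(w) = (y - w.x)^2 + (lam/2) * ||w - w_old||^2
    New weights are drawn from  p(w) proportional to exp(-beta * E(w)),
    which for this quadratic energy is a Gaussian.
    """
    d = len(w_old)
    precision = 2.0 * beta * np.outer(x, x) + beta * lam * np.eye(d)
    mean = np.linalg.solve(precision, 2.0 * beta * y * x + beta * lam * w_old)
    cov = np.linalg.inv(precision)
    return rng.multivariate_normal(mean, cov)

# teacher-student toy run: noisy linear teacher, on-line Gibbs student;
# with fixed lam and beta the student hovers near the teacher (the paper
# analyzes the schedules that give convergence of the generalization error)
d = 5
w_teacher = rng.normal(size=d)
w = np.zeros(d)
for t in range(5000):
    x = rng.normal(size=d)
    y = w_teacher @ x + 0.1 * rng.normal()
    w = online_gibbs_step(w, x, y)
print("teacher-student distance after 5000 examples:", np.linalg.norm(w - w_teacher))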
From omlinc@research.nj.nec.com Tue Nov 28 15:01:53 1995
Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Tue, 28 Nov 95 15:01:42 -0600; AA25992
Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Tue, 28 Nov 95 15:01:15 -0600
Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ac25766; 27 Nov 95 17:18:54 EST
Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa25760; 27 Nov 95 17:05:30 EST
Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25321; 27 Nov 95 17:04 EST
Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa28764; 27 Nov 95 15:18:50 EST
Received: from zingo.nj.nec.com by RI.CMU.EDU id aa05529; 27 Nov 95 15:18:03 EST
Received: by zingo (940816.SGI.8.6.9/YDL1.4-910307.16) id PAA10224(zingo); Mon, 27 Nov 1995 15:15:28 -0500
Received: by arosa (940816.SGI.8.6.9/cliff's joyful mailer #2) id PAA18113(arosa); Mon, 27 Nov 1995 15:15:28 -0500
Date: Mon, 27 Nov 1995 15:15:28 -0500
From: Christian Omlin
Message-Id: <199511272015.PAA18113@arosa>
To: connectionists@cs.cmu.edu
Subject: paper available

Following the announcement of the paper by Wolfgang Maass on the computational power of networks consisting of neurons that communicate via spike trains, we thought the following paper might be of interest to the connectionist community. It can be retrieved from the website

http://www.neci.nj.nec.com/homepages/omlin/omlin.html

We welcome your comments.

-Christian

=======================================================================

Training Recurrent Neural Networks with Temporal Input Encodings

C.W. Omlin (a,b), C.L. Giles (a,c), B.G. Horne (a), L.R. Leerink (d), T. Lin (a)

(a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
(b) CS Department, RPI, Troy, NY 12180
(c) UMIACS, University of Maryland, College Park, MD 20742
(d) EE Department, The University of Sydney, Sydney, NSW 2006

Abstract

We investigate the learning of deterministic finite-state automata (DFAs) with recurrent networks with a single input neuron, where each input symbol is represented as a temporal pattern and strings as sequences of temporal patterns. We empirically demonstrate that obvious temporal encodings can make learning very difficult or even impossible. Based on preliminary results, we formulate some hypotheses about the increase in training time compared to training networks with multiple input neurons.
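For readers unfamiliar with the setup, the sketch below shows one way a temporal input encoding can look: each symbol is expanded into a short scalar sequence that is fed to the single input neuron, one value per time step. The particular bit patterns are hypothetical and are not the encodings studied in the paper.

from typing import Dict, List

# Hypothetical temporal patterns for a two-letter alphabet; each symbol becomes
# a fixed-length sequence of scalar values for the single input neuron.
TEMPORAL_CODE: Dict[str, List[float]] = {
    "a": [1.0, 0.0, 0.0],   # assumed pattern for symbol 'a'
    "b": [1.0, 1.0, 0.0],   # assumed pattern for symbol 'b'
}

def encode_string(s: str) -> List[float]:
    """Unroll a symbol string into the scalar input sequence seen by the
    single-input recurrent network (patterns are simply concatenated)."""
    seq: List[float] = []
    for sym in s:
        seq.extend(TEMPORAL_CODE[sym])
    return seq

# A string accepted or rejected by some DFA becomes one long input sequence;
# the target label is presented only after the final time step.
print(encode_string("abba"))
# -> [1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]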
From Weevey@cris.com Wed Nov 29 16:12:03 1995
Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Wed, 29 Nov 95 16:11:59 -0600; AA17550
Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Wed, 29 Nov 95 16:11:57 -0600
Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa27172; 28 Nov 95 13:58:02 EST
Received: by TELNET-1.SRV.CS.CMU.EDU id ah27170; 28 Nov 95 13:38:39 EST
Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa27138; 28 Nov 95 13:04:57 EST
Received: by GS151.SP.CS.CMU.EDU id aa25881; 28 Nov 95 13:03 EST
Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25876; 28 Nov 95 12:48 EST
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id ak01259; 27 Nov 95 22:29:16 EST
Received: from deathstar-fddi.cris.com by EDRC.CMU.EDU id aa29116; 27 Nov 95 21:47:54 EST
Received: from mariner.cris.com by deathstar.cris.com [1-800-745-CRIS (voice)]
Errors-To: Weevey@cris.com
Date: Mon, 27 Nov 1995 21:47:34 -0500 (EST)
From: WEEVEY
To: connectionists@cs.cmu.edu
Subject: Dissn Research Summary - Primary Visual Cortex
In-Reply-To: <9511260703.AA04033@triangulum>
Message-Id:
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

The following is the abstract from my dissertation, which was completed back in May. More information about this research may be found at the following URL: http://www.cris.com/~Weevey.

Sincerely,
Eva S. Simmons

************************************************************************

CIRCUITRY STUDIES OF A COMPUTATIONAL MODEL OF THE PRIMARY VISUAL CORTEX

Eva Sabrina Simmons, Ph.D.
University of Texas at Austin, 1995
Supervisor: Robert E. Wyatt

The goals of this project include: (1) proving a procedure for circuit determination of any cytoarchitectonic area of the brain, given that certain kinds of data are known, as in the computational model being used here, and (2) proposing one circuit of activity, with three variations on it obtained by changing the connection strength from the standard. Applying the concept of the connection matrix and obtaining basic statistical data about the connections present with respect to presynaptic cells, basic connection data are obtained as the specified anatomy of the cells and their random placement in appropriate layers has allowed. Also, by allowing activity over a period of 20 ms, time propagation data are produced. By keeping a record of activated and deactivated cells at each time step, whose minor types have been read in from a file, and by figuring out exactly how each cell was activated, pieces of the circuits can be produced. Later a circuit diagram can be produced from this data. The sets used for this study are: 400- and 2000-cell sets for basic data, and 1000- and 2000-cell sets for variations of connection strength.

The following conclusions can be made:
(1) The data show an increase in cell type activity with an increase in cell count in the first two time intervals (0.00-5.00 ms).
(2) The pattern seen over the time intervals is: the first time interval, A (0.00-2.50 ms), is always a period of immense activity. During the second time interval, B (2.55-5.00 ms), the activity continues to be heavy, with no new cell types being activated. During the following time intervals, C through H (5.05-20.00 ms), moderate activity occurs.
(3) The pattern of activity, as found in experiment, is also found here.
(4) A pattern of cell type activity is seen to some degree when comparing sets, with some changes depending on cell count and variations in connection strength.
(5) The circuits that have been found were as expected in the literature. @}---------- THE SIMMONS FACTOR --------- EVA SABRINA SIMMONS, PH.D. -------{@ WWW Personal Page: http://www.cris.com/~Weevey/index.html @}---- @}---- @}---- WATCH IT, OR IT MIGHT ATTACK!! ;) ---{@ ---{@ ---{@ ---{@ From andreas@sabai.cs.colorado.edu Wed Nov 29 16:12:05 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Wed, 29 Nov 95 16:12:02 -0600; AA17556 Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Wed, 29 Nov 95 16:11:59 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa27261; 28 Nov 95 14:30:16 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa27257; 28 Nov 95 14:21:34 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25911; 28 Nov 95 14:21 EST Received: from RI.CMU.EDU by B.GP.CS.CMU.EDU id aa05702; 28 Nov 95 4:44:32 EST Received: from sabai.cs.Colorado.EDU by RI.CMU.EDU id aa08151; 28 Nov 95 4:44:08 EST Received: (from andreas@localhost) by sabai.cs.colorado.edu (8.6.12/8.6.12) id CAA12163; Tue, 28 Nov 1995 02:43:59 -0700 From: Andreas Weigend Message-Id: <199511280943.CAA12163@sabai.cs.colorado.edu> Subject: NIPS Time Series Workshop / Final List To: connectionists@cs.cmu.edu Date: Tue, 28 Nov 1995 02:43:58 -0700 (MST) Cc: yaser@cs.caltech.edu X-Mailer: ELM [version 2.4 PL23] Content-Type: text Content-Length: 3165 NIPS*95 Workshop: NOISY TIME SERIES Organized by Yaser Abu-Mostafa (Caltech) and Andreas Weigend (University of Colorado at Boulder) This workshop extends over both days (December 1 and 2, 1995, Vail, Colorado). URL: http://www.cs.colorado.edu/~andreas/Time-Series/NIPS95_ToC.html Noisy time series is an application domain that presents unique challenges to neural networks and other learning techniques. Even without the noise as a factor, a practical time series suffers from non-stationarity, alternating between different regimes, and behaving differently at different time scales. The high level of noise in many time series complicates matters further by impeding learning and by amplifying the problems of overfitting and local minima. This workshop addresses the theory and techniques for dealing with these issues and uses real-life examples of time series to illustrate the main ideas. Some of the talks of this workshop will appear in a special issue on Noisy Time Series of the International Journal of Neural Systems (World Scientific; submission deadline is January 31, 1996). Furthermore, there will be an extra session (on Saturday during the day, or on Saturday morning) to discuss clearning. The talks are listed in alphabetical order by speaker. Please contact the speakers directly if you are interested in preprints! 
1.Yaser Abu-Mostafa: Validation of Volatility Models 2.Yoshua Bengio: Multi-Scale Temporal Models for Long-Term Dependencies 3.Vance Bjorn: Measuring Predictability in Scale 4.David Blum: Noise in EEG Data 5.Peter Bolland: Robust Non-Linear Multivariate Kalman Filter for Arbitrage Identification 6.Neil Burgess: Modeling Co-Integrated Time Series 7.Mark Craven: Understanding Time-Series Networks: A Case Study in Rule Extraction 8.Matt Kennel: Statistical Test for Dynamical Nonstationarity in Observed Time-Series Data 9.Ronny Meir: Error Bounds for Time Series Prediction Using a Mixtures of Experts 10.John Moody: Toward Minimal Prediction Risk: Model Selection and Construction Strategies for Noisy Time Series Problems 11.Barak Pearlmutter: Statistical Limits to Learning Temporal Structure 12.Ashok Srivastava: Improved Time Series Segmentation using Gated Experts with Simulated Annealing 13.Andreas Weigend: Modeling, Learning, and Meaning of Noisy Time Series (Overview talk of workshop) 14.Georg Zimmermann: Neuro versus Neuro Fuzzy Models in Economics _______________________________________________________________________ _/_/_/ _/_/_/ _/_/_/ Prof Andreas Weigend _/ _/ _/ _/ Computer Science Dept _/ _/ _/ University of Colorado _/ _/ _/ Boulder, CO 80309-0430 _/ _/ _/ fax: 303/492-2844 _/ _/ _/ _/ voice: 303/492-2524 _/_/_/ _/_/_/ e-mail: andreas@cs.colorado.edu ____________________________http://www.cs.colorado.edu/~andreas/Home.html From A.Sharkey@dcs.shef.ac.uk Wed Nov 29 16:12:08 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Wed, 29 Nov 95 16:12:04 -0600; AA17558 Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Wed, 29 Nov 95 16:12:01 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa27275; 28 Nov 95 14:39:34 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa27263; 28 Nov 95 14:22:24 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25915; 28 Nov 95 14:21 EST Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa08949; 28 Nov 95 9:34:01 EST Received: from gw.dcs.shef.ac.uk by CS.CMU.EDU id aa03159; 28 Nov 95 9:31:55 EST Received: from entropy.dcs.shef.ac.uk by dcs.shef.ac.uk (4.1/DAVE-1.0) id AA17559; Tue, 28 Nov 95 14:31:32 GMT Received: by entropy.dcs.shef.ac.uk (920330.SGI/SMI-4.1) id AA02615; Tue, 28 Nov 95 14:31:51 GMT Date: Tue, 28 Nov 95 14:31:51 GMT From: A.Sharkey@dcs.shef.ac.uk Message-Id: <9511281431.AA02615@entropy.dcs.shef.ac.uk> To: Connectionists@CS.cmu.edu Subject: Special issue of Connection Science ************ COMBINING NEURAL NETS ************ CALL FOR PAPERS: Deadline February 14th 1996 Papers are sought for this special issue of Connection Science. The aim of this special issue is to examine when, how, and why neural nets should be combined. The reliability of neural nets can be increased through the use of both redundant and modular nets, (either trained on the same task under differing conditions, or on different subcomponents of a task). Questions about the exploitation of redundancy and modularity in the combination of nets, or estimators, have both an engineering and a biological relevance, and include the following: * how best to combine the outputs of several nets. * quantification of the benefits of combining. * how best to create redundant nets that generalise differently (e.g. active learning methods). * how to effectively subdivide a task. * communication between neural net modules. * increasing the reliability of nets. 
* the use of neural nets for safety critical applications. Special issue editor: Amanda Sharkey (Sheffield, UK) Editorial Board: Leo Breiman (Berkeley, USA) Nathan Intrator (Brown, USA) Robert Jacobs (Rochester, USA) Michael Jordan (MIT, USA) Paul Munro (Pittsburgh, USA) Michael Perrone (IBM, USA) David Wolpert (Santa Fe Institute, USA) We solicit either theoretical or experimental papers on this topic. Questions and submissions concerning this special issue should be sent by February 14th 1996 to: Dr Amanda Sharkey,Department of Computer Science,Regent Court,Portobello Street,University of Sheffield,Sheffield, S1 4DP,United Kingdom. Email: amanda@dcs.shef.ac.uk From ruppin@math.tau.ac.il Wed Nov 29 16:12:09 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Wed, 29 Nov 95 16:12:07 -0600; AA17567 Message-Id: <9511292212.AA06902@lucy.cs.wisc.edu> Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Wed, 29 Nov 95 16:12:03 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id ab27275; 28 Nov 95 14:40:28 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa27266; 28 Nov 95 14:23:24 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa25919; 28 Nov 95 14:22 EST From: Eytan Ruppin Date: Tue, 28 Nov 1995 19:23:55 +0200 To: Connectionists@CS.cmu.edu Subject: MEMORY Adams Super Center for Brain Studies at Tel Aviv University =========================================================== Workshop Announcement MEMORY ORGANIZATION AND CONSOLIDATION: COGNITIVE AND COMPUTATIONAL PERSPECTIVES A workshop on Memory Organization and Consolidation will be held during May 28-30, 1996 at Tel-Aviv University, Israel. This meeting is sponsored by Mr. Branco Weiss. In the last two decades the field of memory research has grown tremendously. This rapid expansion has been manifested in the recognition of the multiplicity of memory systems and the rise in popularity of multiple-memory system analysis, in the ability to trace changes of brain activity during memory performance using novel molecular, electrophysiological and imaging techniques, and in the development of fairly complex models of memory. The planned workshop will address these issues and discuss how memory storage and retrieval processes are organized in the brain. In particular, we shall focus on memory consolidation. This process involves the alteration of memory traces from temporary, `short-term' storage to `long-term' memory stores. It is a fundamental and intriguing process, which is considered to be strongly connected to our ability to form generalizations by learning from examples, and may depend also upon the integrity of specific sleep stages. The process of consolidation has recently become accessible to formal analysis using novel computational neural models. The workshop will provide a meeting ground for both experimental and computational research approaches. Numerous questions arise with regard to consolidation theory: What explanations could it offer? What are its neuronal (molecular, neurochemical) foundations? What are the relations between the consolidation processes and the circadian cycle? What are modulators of consolidation? What insights can be gained from computational models and how can the predictions they make be tested experimentally? 
The multidisciplinary nature of memory consolidation research, together with recent advancements, make the proposed workshop a promising opportunity for a timely and fruitful exchange of ideas between researchers employing different research methodologies, but sharing common interests in the study of memory. The workshop will consist of a three day meeting, and will include a series of invited talks, a poster session, and discussion panels. We have invited speakers from different disciplines of the Neurosciences who will discuss psychological, neurological, physiological and computational perspectives of the subject. An informal atmosphere will be maintained, encouraging questions and discussions. CURRENTLY CONFIRMED SPEAKERS Martin Albert (Boston) Daniel Amit (Jerusalem and Rome) Yadin Dudai (Weizmann Institute) Yoram Feldon (Zurich and Tel Aviv) Mark Gluck (Rutgers) Michael Hasselmo (Harvard) Avi Karni (NIH) Amos Korzcyn (Tel Aviv) Jay McClelland (CMU) Bruce McNaughton (Arizona) Matti Mintz (Tel Aviv) Morris Moscovitch (Toronto) Richard Thompson (USC) CALL FOR ABSTRACTS Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single page submission should include title, author(s), contact information (address and email/fax), and abstract, and will be reviewed by the Program Committee. Abstract submissions should ARRIVE by March 31st, 1996, and should be sent to Eytan Ruppin, Dept. of Computer-Science, Tel-Aviv University, Tel-Aviv, Israel, 69978. Program Committee: ----------------- David Horn, Michael Mislobodsky and Eytan Ruppin (Tel-Aviv). Registration and Further Information: ----------------------------------- To register for the workshop, please fill out the registration form attached below and send it to Mrs. Bila Lenczner Adams Super Center for Brain Studies Tel Aviv University Tel Aviv 69978, Israel Tel.: 972-3-6407377 Fax: 972-3-6407932 email:memory@neuron.tau.ac.il The workshop will take place at the Gordon faculty club of Tel Aviv University. The registration fee of $70 covers lunch and refreshments throughout the three days of the workshop. Optionally one may register for $30 covering refreshments only. Since the number of places is limited please register early to secure your participation. Further questions about conference administration should be directed to Mrs. Bila Lenczner. For questions about the workshop technical/scientific content or absract submissions, please contact Eytan Ruppin Dept. of Computer Science Tel-Aviv University, Tel-Aviv, 69978, Israel. 
Tel.: 972-3-6407864 Fax: 972-3-6409357 email: ruppin@math.tau.ac.il The final workshop program and updated information will be available on a WWW homepage at http://neuron.tau.ac.il/Adams/memory ================================================================== REGISTRATION FORM MEMORY WORKSHOP May 28-30, 1996 Name: ___________________________________________________ Affiliation: ________________________________________________ Address: _________________________________________________ _________________________________________________________ Telephone: ___________________________ Fax: ________________________________ e-mail: ______________________________ ___ $70 Registration fee including lunch ___ $30 Registration fee including refreshments only Amount Enclosed: $________________ MAKE CHECKS PAYABLE TO "Tel Aviv University" From jbower@bbb.caltech.edu Thu Nov 30 14:05:43 1995 Received: from lucy.cs.wisc.edu by sea.cs.wisc.edu; Thu, 30 Nov 95 14:05:41 -0600; AA04651 Received: from TELNET-1.SRV.CS.CMU.EDU by lucy.cs.wisc.edu; Thu, 30 Nov 95 14:05:39 -0600 Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa28865; 29 Nov 95 11:59:27 EST Received: from GS151.SP.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa28863; 29 Nov 95 11:40:09 EST Received: from GS151.SP.CS.CMU.EDU by GS151.SP.CS.CMU.EDU id aa26503; 29 Nov 95 11:39 EST Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id ab22473; 28 Nov 95 22:13:13 EST Received: from smaug-gw.caltech.edu by CS.CMU.EDU id aa05298; 28 Nov 95 22:12:28 EST Received: from bbb.caltech.edu (smaug.bbb.caltech.edu) by gateway.bbb.caltech.edu (4.1/SMI-4.0) id AA13335; Tue, 28 Nov 95 19:21:34 PST Received: from [131.215.137.84] (gatorbox4.bbb.caltech.edu) by bbb.caltech.edu (4.1/SMI-4.1) id AA03914; Tue, 28 Nov 95 19:21:06 PST Date: Tue, 28 Nov 95 19:21:05 PST Message-Id: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" To: Connectionists@cs.cmu.edu From: jbower@bbb.caltech.edu Subject: Call for Papers -- CNS*96 ********************************************************************** CALL FOR PAPERS Fifth Annual Computational Neuroscience Meeting CNS*96 July 14 - 17, 1996 Boston, Massachusetts ................ DEADLINE FOR SUMMARIES AND ABSTRACTS: **>> January 25, 1996 <<** ^^^^^^^^^^^^^^^^ This is the fifth annual meeting of an interdisciplinary conference addressing a broad range of research approaches and issues involved in the field of computational neuroscience. These meetings bring together experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in the functioning of biological nervous systems. The peer reviewed papers presented at the conference are all related to understanding how nervous systems compute. As in previous years, CNS*96 will equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. The meeting in 1996 will take place at the Cambridge Center Marriott Hotel and include plenary, contributed, and poster sessions. The first session starts at 9 am, Sunday July 14th and the last session ends at 5 pm on Wednesday, July 17th. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume. The meeting will include time for informal workshops focused on current issues in computational neuroscience. Travel funds will be available for students and postdoctoral fellows presenting papers at the meeting. 
Day care will be available for children. SUBMISSION INSTRUCTIONS: With this announcement we solicit the submission of papers for presentation. All papers will be refereed. Authors should send original research contributions in the form of a 1000-word (or less) summary and a separate single page 100 word abstract clearly stating their results. Summaries are for program committee use only. Abstracts will be published in the conference program. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list: Presentation categories: A. Theory and Analysis B. Modeling and Simulation C. Experimental D. Tools and Techniques Themes: A. Development B. Cell Biology C. Excitable Membranes and Synaptic Mechanisms D. Neurotransmitters, Modulators, Receptors E. Sensory Systems 1. Somatosensory 2. Visual 3. Auditory 4. Olfactory 5. Other systems F. Motor Systems and Sensory Motor Integration G. Learning and Memory H. Behavior I. Cognition J. Disease Include addresses of all authors on the front of the summary and the abstract including the E-mail address for EACH author. Indicate on the front of the summary to which author correspondence should be addressed. Also, indicate first author. Program committee decisions will be sent to the corresponding author only. Submissions will not be considered if they lack category information, separate abstract sheets, author addresses, or are late. Submissions can be made by surface mail ONLY by sending 6 copies of the abstract and summary to: CNS*96 Submissions Division of Biology 216-76 Caltech Pasadena, CA 91125 ADDITIONAL INFORMATION can be obtained by: o Using our on-line WWW information and registration server at the URL: http://www.bbb.caltech.edu/cns96/cns96.html o ftp-ing to our ftp site: yourhost% ftp ftp.bbb.caltech.edu Name: ftp Password: yourname@yourhost.yoursite.yourdomain ftp> cd pub/cns96 ftp> ls ftp> get filename o Sending Email to: cns96@smaug.bbb.caltech.edu CNS*96 ORGANIZING COMMITTEE: Co-meeting chair / logistics - Mike Hasselmo, Harvard University Co-meeting chair / finances - John Miller, UC Berkeley Co-meeting chair / program - Jim Bower, Caltech Program Committee: Charlie Anderson, Washington University Axel Borst, Max-Planck Inst., Tuebingen, Germany Dennis Glanzman, NIMH/NIH Nancy Kopell, Boston University Christiane Linster, Harvard University Mark Nelson, University of Illinois, Urbana Maureen Rush, California State University, Bakersfield Karen Sigvardt, University of California, Davis Philip Ulinski, University of Chicago Regional Organizers: Europe- Erik De Schutter (Belgium) Middle East - Idan Segev (Jerusalem) Down Under - Mike Paulin (New Zealand) South America - Renato Sabbatini (Brazil) Asia - Zhaoping Li (Hong Kong) ********************************************************************** *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic addresses for: laboratory http://www.bbb.caltech.edu/bowerlab GENESIS: http://www.bbb.caltech.edu/GENESIS science education reform http://www.caltech.edu/~capsi