From hinton@cs.toronto.edu Tue Apr 1 13:48:59 1997
Received: from lucy.cs.wisc.edu (lucy.cs.wisc.edu [128.105.2.11]) by sea.cs.wisc.edu (8.6.12/8.6.12) with ESMTP id NAA17264 for ; Tue, 1 Apr 1997 13:48:53 -0600
Received: from TELNET-1.SRV.CS.CMU.EDU (TELNET-1.SRV.CS.CMU.EDU [128.2.254.108]) by lucy.cs.wisc.edu (8.7.6/8.7.3) with SMTP id NAA09704 for ; Tue, 1 Apr 1997 13:48:52 -0600 (CST)
Received: from TELNET-1.SRV.CS.CMU.EDU by telnet-1.srv.cs.CMU.EDU id aa10830;
1 Apr 97 13:04:56 EST
Received: from DST.BOLTZ.CS.CMU.EDU by TELNET-1.SRV.CS.CMU.EDU id aa10825;
1 Apr 97 12:48:08 EST
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa11508;
1 Apr 97 12:47:28 EST
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa25099; 1 Apr 97 12:05:25 EST
Received: from yonge.cs.toronto.edu by CS.CMU.EDU id aa12420;
1 Apr 97 12:03:36 EST
Received: from neuron.ai.toronto.edu ([128.100.3.14]) by yonge.cs.toronto.edu with SMTP id <86536-8370>; Tue, 1 Apr 1997 12:03:26 -0500
Received: from localhost by neuron.ai.toronto.edu with SMTP id <809>; Tue, 1 Apr 1997 12:03:21 -0500
To: Mailing List
Subject: new paper available
Date: Tue, 1 Apr 1997 12:03:17 -0500
From: Geoffrey Hinton
Message-Id: <97Apr1.120321edt.809@neuron.ai.toronto.edu>
"Generative Models for Discovering Sparse Distributed Representations"
Geoffrey E. Hinton and Zoubin Ghahramani
Department of Computer Science
University of Toronto
ABSTRACT
We describe a hierarchical, generative model that can be viewed as a
non-linear generalization of factor analysis and can be implemented in
a neural network. The model uses bottom-up, top-down and lateral
connections to perform Bayesian perceptual inference correctly. Once
perceptual inference has been performed, the connection strengths can
be updated using a very simple learning rule that requires only
locally available information. We demonstrate that the network learns
to extract sparse, distributed, hierarchical representations.
The paper is available at
http://www.cs.toronto.edu/~hinton/ftp/RGBN.ps.Z
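For readers unfamiliar with the linear case, here is a minimal sketch of the factor-analysis generative process that the paper generalizes. All sizes and parameters are hypothetical, and the rectified variant is only a rough illustration of how a nonlinearity can yield sparse latent activities; it is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 latent factors generate 16 observed variables.
n_factors, n_obs = 4, 16
W = rng.normal(scale=0.5, size=(n_obs, n_factors))  # factor loadings
psi = 0.1 * np.ones(n_obs)                          # independent noise variances

def sample_factor_analysis(n_samples):
    """Top-down generation: x = W h + noise, with h ~ N(0, I)."""
    h = rng.normal(size=(n_samples, n_factors))
    noise = rng.normal(scale=np.sqrt(psi), size=(n_samples, n_obs))
    return h @ W.T + noise

def sample_rectified(n_samples):
    """One nonlinear variant: rectifying the latents gives sparse,
    non-negative factor activities (an analogue of the sparse
    representations discussed in the abstract, not the paper's model)."""
    h = np.maximum(rng.normal(size=(n_samples, n_factors)), 0.0)
    noise = rng.normal(scale=np.sqrt(psi), size=(n_samples, n_obs))
    return h @ W.T + noise

X = sample_factor_analysis(1000)
print(X.shape)  # (1000, 16)
```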
From qian@brahms.cpmc.columbia.edu Tue Apr 1 19:59:45 1997
Date: Tue, 1 Apr 1997 15:07:33 -0700
Message-Id: <199704012207.PAA19974@brahms.cpmc.columbia.edu>
From: Ning Qian
To: Connectionists@cs.cmu.edu
Subject: vision postdoc position at Columbia
Reply-to: qian@brahms.cpmc.columbia.edu
Postdoctoral Position in Visual Psychophysics
Center for Neurobiology and Behavior
Columbia University
New York, NY
A postdoctoral fellowship position in Visual Psychophysics is
available immediately in my lab at Columbia University. The
individual will participate in psychophysics projects that investigate
the mechanisms of motion detection, stereoscopic depth perception and
motion-stereo integration. There will be close interactions between
these projects and the related computational modeling projects in the
same lab. The details of our research interests and publications can
be found at the web site listed below. The funding for the position
is available for two years with the possibility of renewal.
Applicants should have a strong background in visual psychophysics and
should be able to write programs (or adapt our current psychophysics
software package) for generating visual stimuli. Please send a CV,
representative publications, and two letters of recommendation to:
Dr. Ning Qian
Center for Neurobiology and Behavior
Columbia University
722 W. 168th St., A730
New York, NY 10032
qian@brahms.cpmc.columbia.edu (email)
212-960-2213 (phone)
212-960-2561 (fax)
I will attend the ARVO meeting in May. The phone number of my hotel
is 954-525-3484. Please email me if you would like to arrange a
meeting there.
*********************************************************************
The details of our research interests and publications can be found at:
http://brahms.cpmc.columbia.edu
Selected Papers:
Binocular Disparity and the Perception of Depth [Review], Ning Qian,
Neuron, 1997, 18:359-368.
The Effect of Complex Motion Pattern on Speed Perception, Bard J Geesaman
and Ning Qian (submitted to Vision Research).
A Novel Speed Illusion Involving Expansion and Rotation Patterns,
Bard J Geesaman and Ning Qian, Vision Research, 1996, 36:3281-3292.
Transparent Motion Perception as Detection of Unbalanced Motion
Signals I: Psychophysics, Ning Qian, Richard A. Andersen and Edward H.
Adelson, J. Neurosci., 1994, 14:7357-7366.
A Physiological Model for Motion-stereo Integration and a Unified
Explanation of the Pulfrich-like Phenomena, Ning Qian and Richard
A. Andersen, Vision Research, 1997 (in press).
Physiological Computation of Binocular Disparity, Ning Qian and
Yudong Zhu, Vision Research, 1997 (in press).
Binocular Receptive Field Profiles, Disparity Tuning and
Characteristic Disparity, Yudong Zhu and Ning Qian, Neural
Computation, 1996, 8:1647-1677.
Computing Stereo Disparity and Motion with Known Binocular Cell
Properties, Ning Qian, Neural Computation, 1994, 6:390-404.
From cas-cns@cns.bu.edu Wed Apr 2 18:44:38 1997
Message-Id: <3.0.1.32.19970401104517.006b2534@cns.bu.edu>
Date: Tue, 01 Apr 1997 10:45:17 -0500
From: BU - Cognitive and Neural Systems
Subject: VISION, RECOGNITION, ACTION: FINAL CALL
**** FINAL CALL FOR REGISTRATION *****
International Conference on
VISION, RECOGNITION, ACTION: NEURAL MODELS OF MIND AND MACHINE
May 28--31, 1997
Sponsored by the
Center for Adaptive Systems
and the
Department of Cognitive and Neural Systems
Boston University
with financial support from
the Defense Advanced Research Projects Agency
and
the Office of Naval Research
This conference will include 21 invited lectures and 88 contributed
lectures and posters by experts on the biology and technology of how
the brain and other intelligent systems see, understand, and act upon
a changing world. The program is listed below.
Since seating at the meeting is limited, early registration is
recommended. To register, please fill out the registration form
below. Student registrations must be accompanied by a letter of
verification from a department chairperson or faculty/research
advisor. If paying by check, mail to: Neural Models of Mind and
Machine, c/o Cynthia Bradford, Boston University, Department of
Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If
paying by credit card, mail to the above address, or fax to (617)
353-7755. The meeting registration fee will help to pay for a
reception, 6 coffee breaks, and the meeting proceedings.
A day of tutorials will be held on Wednesday, May 28. The tutorial
registration fee helps to pay for 2 coffee breaks and a hard copy of
the 7 hours of tutorial viewgraphs.
See the meeting web page at http://cns-web.bu.edu/cns-meeting for
further meeting information.
****************************************
REGISTRATION FORM
(Please Type or Print)
Vision, Recognition, Action: Neural Models of Mind and Machine
Boston University
Boston, Massachusetts
Tutorials: May 28, 1997
Meeting: May 29-31, 1997
Mr/Ms/Dr/Prof:
Name:
Affiliation:
Address:
City, State, Postal Code:
Phone and Fax:
Email:
The conference registration fee includes the meeting program,
reception, coffee breaks, and meeting proceedings. For registered
participants in the conference, the regular tutorial registration fee
is $20 and the student fee is $15. For attendees of the tutorial only,
the regular registration fee is $30 and the student fee is $25. The
tutorial registration fee covers two coffee breaks and a tutorial
handout.
CHECK ONE:
[ ] $55 Conference plus Tutorial (Regular)
[ ] $40 Conference plus Tutorial (Student)
[ ] $35 Conference Only (Regular)
[ ] $25 Conference Only (Student)
[ ] $30 Tutorial Only (Regular)
[ ] $25 Tutorial Only (Student)
Method of Payment:
[ ] Enclosed is a check made payable to "Boston University".
Checks must be made payable in US dollars and issued by a US
correspondent bank. Each registrant is responsible for any and
all bank charges.
[ ] I wish to pay my fees by credit card (MasterCard, Visa, or
Discover Card only).
Type of card:
Name as it appears on the card:
Account number:
Expiration date:
Signature and date:
****************************************
MEETING SCHEDULE (poster session details follow the oral session schedule)
WEDNESDAY, MAY 28, 1997 (Tutorials):
7:30am---8:30am MEETING REGISTRATION
8:30am--10:00am Stephen Grossberg (Part I):
"Vision, Brain, and Technology"
10:00am--10:30am COFFEE BREAK
10:30am--12:00pm Stephen Grossberg (Part II):
"Vision, Brain, and Technology"
12:00pm---1:15pm LUNCH
1:15pm---3:15pm Gail Carpenter:
"Self-Organizing Neural Networks for Learning,
Recognition, and Prediction: ART Architectures
and Applications"
3:15pm---3:45pm COFFEE BREAK
3:45pm---5:45pm Eric Schwartz:
"Algorithms and Hardware for the Application of
Space-Variant Active Vision to High Performance
Machine Vision"
THURSDAY, MAY 29, 1997 (Invited Lectures and Posters):
7:30am---8:30am MEETING REGISTRATION
8:30am---9:15am Robert Shapley:
"Brain Mechanisms for Visual Perception of Occlusion"
9:15am--10:00am George Sperling:
"An Integrated Theory for Attentional Processes in
Vision, Recognition, and Memory"
10:00am--10:30am COFFEE BREAK AND POSTER SESSION I
10:30am--11:15am Patrick Cavanagh:
"Direct Recognition"
11:15am--12:00pm Stephen Grossberg:
"Perceptual Grouping during Neural Form and
Motion Processing"
12:00pm---1:30pm LUNCH
1:30pm---2:15pm Robert Desimone:
"Neuronal Mechanisms of Visual Attention"
2:15pm---3:00pm Ennio Mingolla:
"Visual Search"
3:00pm---3:30pm COFFEE BREAK AND POSTER SESSION I
3:30pm---4:15pm Patricia Goldman-Rakic:
"The Machinery of Mind: Models from Neurobiology"
4:15pm---5:00pm Larry Squire:
"Brain Systems for Recognition Memory"
5:00pm---8:00pm POSTER SESSION I
FRIDAY, MAY 30, 1997 (Invited and Contributed Lectures):
8:00am---8:30am MEETING REGISTRATION
8:30am---9:15am Lance Optican:
"Neural Control of Rapid Eye Movements"
9:15am--10:00am John Kalaska:
"Reaching to Visual Targets: Cerebral Cortical
Neuronal Mechanisms"
10:00am--10:30am COFFEE BREAK
10:30am--11:15am Rodney Brooks:
"Models of Vision-Based Human Interaction"
11:15am--12:00pm Alex Pentland:
"Interpretation of Human Action"
12:00pm---1:30pm LUNCH
1:30pm---1:45pm Paolo Gaudiano:
"Retinal Processing of IRFPA Imagery"
1:45pm---2:00pm Zili Liu:
"2D Ideal Observers in 3D Object Recognition"
2:00pm---2:15pm Soheil Shams:
"Object Segmentation and Recognition via a Network
of Resonating Spiking Neurons"
2:15pm---2:30pm Wey-Shiuan Hwang and John Weng:
"Autonomous Learning for Visual Attention Selection"
2:30pm---2:45pm Shane W. McWhorter, Theodore J. Doll,
and Anthony A. Wasilewski:
"Integration of Computational Vision Research Models
for Visual Performance Prediction"
2:45pm---3:00pm Frank S. Holman III and Robert J. Marks II:
"Platform Independent Geometry Verification Using
Neural Networks Including Color Visualization"
3:00pm---3:30pm COFFEE BREAK
3:30pm---3:45pm Heiko Neumann and Wolfgang Sepp:
"A Model of Cortico-Cortical Integration of Visual
Information: Receptive Fields, Grouping, and
Illusory Contours"
3:45pm---4:00pm Constance S. Royden:
"A Biological Model for Computing Observer Motion
in the Presence of Moving Objects"
4:00pm---4:15pm Michele Fabre-Thorpe, Ghislaine Richard,
and Simon Thorpe:
"Rapid Categorization of Natural Images in Rhesus
Monkeys: Implications for Models of Visual Processing"
4:15pm---4:30pm Raju S. Bapi and Michael J. Denham:
"Neural Network Model of Experiments on
Set-Shifting Paradigm"
4:30pm---4:45pm Jose L. Contreras-Vidal and George E. Stelmach:
"Adaptive Resonance Theory Computations in the
Cortico-Striatal Circuits are Gated by Dopamine
Activity during Reward-Related Learning of
Approach Behavior"
4:45pm---5:00pm Mingui Sun, Murat Sonmez, Ching-Chung Li,
and Robert J. Sclabassi:
"Application of Time-Frequency Analysis, Artificial
Neural Networks, and Decision Making Theory to
Localization of Electrical Sources in the Brain
Based on Multichannel EEG"
5:00pm---6:30pm MEETING RECEPTION
6:30pm---7:30pm Stuart Anstis Keynote Lecture:
"Moving in Unexpected Directions"
SATURDAY, MAY 31 (Invited Lectures and Posters):
8:00am---8:30am MEETING REGISTRATION
8:30am---9:15am Eric Schwartz:
"Multi-Scale Vortex of the Brain: Anatomy as
Architecture in Biological and Machine Vision"
9:15am--10:00am Terrence Boult:
"Polarization Vision"
10:00am--10:30am COFFEE BREAK AND POSTER SESSION II
10:30am--11:15am Allen Waxman:
"Opponent Color Models of Visible/IR Fusion
for Color Night Vision"
11:15am--12:00pm Gail Carpenter:
"Distributed Learning, Recognition, and Prediction
in ART and ARTMAP Networks"
12:00pm---1:30pm LUNCH
1:30pm---2:15pm Tomaso Poggio:
"Representing Images for Visual Learning"
2:15pm---3:00pm Michael Jordan:
"Graphical Models, Neural Networks, and
Variational Approximations"
3:00pm---3:30pm COFFEE BREAK AND POSTER SESSION II
3:30pm---4:15pm Andreas Andreou:
"Mixed Analog/Digital Neuromorphic VLSI
for Sensory Systems"
4:15pm---5:00pm Takeo Kanade:
"Computational VLSI Sensors: Integrating
Sensing and Processing"
5:00pm---8:00pm POSTER SESSION II
POSTER SESSION I: Thursday, May 29, 1997
All posters will be displayed for the full day.
Biological Vision Session:
#1 Vlad Cardei, Brian Funt, and Kobus Barnard:
"Modeling Color Constancy with Neural Networks"
#2 E.J. Pauwels, P. Fiddelaers, and L. Van Gool:
"Send in the DOGs: Robust Clustering using
Center-Surround Receptive Fields"
#3 Tony Vladusich and Jack Broerse:
"Neural Networks for Adaptive Compensation of Ocular
Chromatic Aberration and Discounting Variable Illumination"
#4 Alexander Dimitrov and Jack D. Cowan:
"Objects and Texture Need Different Cortical Representations"
#5 Miguel Las-Heras, Jordi Saludes, and Josep Amat:
"Adaptive Analysis of Singular Points Correspondence
in Stereo Images"
#6 Neil Middleton:
"Properties of Receptive Fields in Radial Basis Function
(RBF) Networks"
#7 David Enke and Cihan Dagli:
"Modeling the Bidirectional Interactions within and
between the LGN and Area V1 Cells"
#8 Scott Oddo, Jacob Beck, and Ennio Mingolla:
"Texture Segregation in Chromatic Element-Arrangement Patterns"
#9 David Alexander and Phil Sheridan:
"Local from Global Geometry of Layers 2, 3, and 4C of
the Macaque Striate Cortex"
#10 Phil Sheridan and David Alexander:
"Invariant Transformations on a Space-Variant Hexagonal Grid"
#11 Irak Vicarte Mayer and Haruhisa Takahashi:
"Simultaneous Edge Detection and Image Segmentation using
Neural Networks and Color Theory"
#12 Adam Reeves and Shuang Wu:
"Visual Adaptation: Stochastic or Deterministic?"
#13 Peter Kalocsai and Irving Biederman:
"Biologically Inspired Recognition Model with Extension Fields"
#14 Stephane J.M. Rainville, Frederick A.A. Kingdom, and
Anthony Hayes:
"Effects of Local Phase Structure on Motion Perception"
#15 Alex Harner and Paolo Gaudiano:
"A Neural Model of Attentive Visual Search"
#16 Lars Liden, Ennio Mingolla, and Takeo Watanabe:
"The Effects of Spatial Frequency, Contrast, Disparity,
and Phase on Motion Integration between Different Areas
of Visual Space"
#17 Brett R. Fajen, Nam-Gyoon Kim, and Michael T. Turvey:
"Robustness of Heading Perception Along Curvilinear Paths"
#18 L.N. Podladchikova, I.A. Rybak, V.I. Gusakova,
N.A. Shevtsova, and A.V. Golovan:
"A Behavioral Model of Active Visual Perception"
#19 Julie Epelboim and Patrick Suppes:
"Models of Eye Movements during Geometrical Problem Solving"
Biological Learning and Recognition Session:
#20 George J. Kalarickal and Jonathan A. Marshall:
"Visual Classical Rearing and Synaptic Plasticity:
Comparison of EXIN and BCM Learning Rules"
#21 Jean-Daniel Kant and Daniel S. Levine:
"ARTCRITIC: An Adaptive Critic Model for Decision Making
in Context"
#22 L. Andrew Coward:
"Electronic Simulation of Unguided Learning, Associative
Memory, Dreaming, and Internally Generated Succession of
Mental Images"
#23 K. Torii, T. Kitsukawa, S. Kunifuji, and T. Matsuzawa:
"A Synaptic Model by Temporal Coding"
#24 Gabriel Robles-de-la-Torre and Robert Sekuler:
"Learning a Virtual Object's Dynamics: Spectral Analysis
of Human Subjects' Internal Representation"
#25 Sheila R. Cooke, Robert Sekuler, Brendan Kitts,
and Maja Mataric:
"Delayed and Real-Time Imitation of Complex Visual `Gestures' "
#26 Brendan Kitts, Sheila R. Cooke, Maja Mataric,
and Robert Sekuler:
"Improved Pattern Recognition by Combining Invariance Methods"
#27 Gregory R. Mulhauser:
"Can ART Dynamics Create a 'Centre of Cognitive Action'
Capable of Supporting Phenomenal Consciousness?"
#28 Bruce F. Katz:
"The Pleasingness of Polygons"
#29 Stephen L. Thaler:
"Device for the Autonomous Generation of Useful Information"
Control and Robotics Session:
#30 John Demeris and Gillian Hayes:
"Integrating Visual Perception and Action in a Robot
Model of Imitation"
#31 Danil V. Prokhorov and Donald C. Wunsch II:
"A General Training Procedure for Stable Control with
Adaptive Critic Designs"
#32 Juan Cires and Pedro J. Zufiria:
"Space Perception through a Self-Organizing Map for
Mobile Robot Control"
#33 Alex Guazzelli and Michael A. Arbib:
"NeWG: The Neural World Graph"
#34 Minh-Chinh Nguyen:
"Robot Vision Without Calibration"
#35 Erol Sahin and Paolo Gaudiano:
"Real-Time Object Localization from Monocular Camera Motion"
#36 Carolina Chang and Paolo Gaudiano:
"A Neural Network for Obstacle Avoidance in Mobile Robots"
#37 P. Gaussier, J.-P. Banquet, C. Joulain, A. Revel,
and S. Zrehen:
"Validation of a Hippocampal Model on a Mobile Robot"
#38 J.-P. Banquet, P. Gaussier, C. Joulain, and A. Revel:
"Learning, Recognition, and Generation of Tempero-Spatial
Sequences by a Cortico-Hippocampal System: A Neural
Network Model"
POSTER SESSION II: Saturday, May 31, 1997
All posters will be displayed for the full day.
Machine Vision Session:
#1 Tyler C. Folsom:
"Edge Detection by Sparse Sampling with Steerable
Quadrature Filters"
#2 Mario Aguilar and Allen M. Waxman:
"Comparison of Opponent-Color Neural Processing and
Principal Components Analysis in the Fusion of Visible
and Thermal IR Imagery"
#3 Magnus Snorrason:
"A Multi-Resolution Feature Integration Model for the
Next-Look Problem"
#4 Charles B. Owen:
"Application of Multiple Media Stream Correlation to
Functional Imaging of the Brain"
Machine Learning Session:
#5 Mukund Balasubramanian and Stephen Grossberg:
"A Neural Architecture for Recognizing 3-D Objects
from Multiple 2-D Views"
#6 Maartje E.J. Raijmakers and Peter C.M. Molenaar:
"Exact ART: A Complete Implementation of an ART Network"
#7 Danil V. Prokhorov and Lee A. Feldkamp:
"On the Relationship between Derivative Adaptive Critics
and Backpropagation through Time"
#8 Tulay Yildirim and John S. Marsland:
"Optimization by Back Propagation of Error in Conic
Section Functions"
#9 John M. Zachary, Jacob Barhen, Nageswara S. Rao, and
Sitharama S. Iyengar:
"A Dynamical Systems Approach to Neural Network Learning
from Finite Examples"
#10 Christos Orovas and James Austin:
"Cellular Associative Neural Networks"
#11 M.A. Grudin, P.J.G. Lisboa, and D.M. Harvey:
"A Sparse Representation of Human Faces for Recognition"
#12 Mike Y.W. Leung and David K.Y. Chiu:
"Feature Selection for Two-Dimensional Shape Discrimination
using Feedforward Neural Networks"
#13 Robert Alan Brown:
"The Creation of Order in a Self-Learning Duplex Network"
#14 C.H. Chen:
"Designing a Neural Network to Predict Human Responses"
#15 Jean-Marc Fellous, Laurenz Wiskott, Norbert Kruger,
and Christoph von der Malsburg:
"Face Recognition by Elastic Bunch Graph Matching"
#16 Gerard J. Rinkus:
"A Monolithic Distributed Representation Supporting
Multi-Scale Spatio-Temporal Pattern Recognition"
#17 Harald Ruda and Magnus Snorrason:
"Evaluating Automatically Constructed Hierarchies of
Self-Organized Neural Network Classifiers"
#18 Ken J. Tomita:
"A Method for Building an Artificial Neural Network
with 2/3 Dimensional Visualization of Input Data"
#19 Fernando J. Corbacho and Michael A. Arbib:
"Towards a Coherence Theory of the Brain and Adaptive Systems"
#20 Gail A. Carpenter, Mark A. Rubin, and William W. Streilein:
"ARTMAP-FD: Familiarity Discrimination of Radar Range Profiles"
#21 James R. Williamson:
"Multifield ARTMAP: A Network for Local, Incremental,
Constructive Learning"
#22 Marcos M. Campos:
"Constructing Adaptive Orthogonal Wavelet Bases with
Self-Organizing Feature Maps"
#23 Sucharita Gopal, Curtis E. Woodcock, and Alan H. Strahler:
"Fuzzy ARTMAP Classification of Global Land Cover from
AVHRR Data Set"
#24 A.F. Rocha and A. Serapiao:
"Fuzzy Modeling of the Visual Mind"
#25 Eun-Jin Kim and Yillbyung Lee:
"Handwritten Hangul Recognition Based on Psychologically
Motivated Model"
#26 Jayadeva:
"A Nonlinear Programming Based Approach to the Traveling
Salesman Problem"
#27 Haruhisa Takahashi:
"Biologically Plausible Efficient Learning Via Local
Delta Rule"
#28 Raonak Zaman and Donald C. Wunsch II:
"Prediction of Yarn Strength from Fiber Properties
using Fuzzy ARTMAP"
VLSI Session:
#29 James Waskiewicz and Gert Cauwenberghs:
"The Boundary Contour System on a Single Chip:
Analog VLSI Architecture"
#30 Marc Cohen, Pamela Abshire, and Gert Cauwenberghs:
"Current Mode VLSI Fuzzy ART Processor with On-Chip Learning"
#31 Shinji Karasawa, Senri Ikeda, Yong Hea Ku, and Jun Hum Chung:
"Methodology of the Decision-Making Device"
#32 Todd Hinck and Allyn E. Hubbard:
"Circuits that Implement Shunting Neurons and Steerable
Spatial Filters"
Audition, Speech, and Language Session:
#33 Colin Davis and Sally Andrews:
"Competitive and Cooperative Effects of Similarity
in Stationary and Self-Organizing Models of Visual
Word Recognition"
#34 Susan L. McCabe and Michael J. Denham:
"Towards a Neurocomputational Model of Auditory Perception"
#35 Dave Johnson:
"A Wavelet-Based Auditory Planning Space for Production
of Vowel Sounds"
#36 Michael A. Cohen, Stephen Grossberg, and Christopher Myers:
"A Neural Model of Context Effects in Variable-Rate
Speech Perception"
#37 Peter Cariani:
"Neural Computation in the Time Domain"
#38 N.K. Kasabov and R. Kozma:
"Chaotic Adaptive Fuzzy Neural Networks and their Applications
for Phoneme-Based Spoken Language Recognition"
****************************************
MEETING HOTEL INFORMATION: For all hotels listed below, meeting attendees
should make their own reservations directly with the hotel using the
meeting name "Vision, Recognition, Action".
1. THE ELIOT HOTEL
370 Commonwealth Avenue
Boston, MA 02215
(617) 267-1607
(800) 443-5468
Janet Brown, director of sales
$130/night is the Boston University rate, and is the lowest
rate that the Eliot will offer to anyone, whether individual
or group.
A block of 12 rooms is being held until April 28, 1997.
This hotel is 3 or 4 blocks from the CNS Department.
2. HOWARD JOHNSONS
575 Commonwealth Avenue
Boston, MA 02215
(617) 267-3100 (reservations)
(617) 864-0300 (sales office)
Eric Perryman, group sales office
Rates: $115/night/single and $125/night/double.
A block of 15 rooms is being held until April 28, 1997.
This hotel is across the street from the CNS Department.
3. THE BUCKMINSTER
645 Beacon Street
Boston, MA 02215
(617) 236-7050
(800) 727-2825
Dan Betro, group sales office
A block of 29 rooms is being held until April 28, 1997.
This hotel is a few steps away from the CNS Department.
Pricing will vary depending on the kind of room; the range is
$55/night up to $129/night. Please inquire directly with the
hotel when making your reservation.
4. HOLIDAY INN, BROOKLINE
1200 Beacon Street
Brookline, MA 02146
(617) 277-1200
(800) 465-4329
Lisa Pedulla, Director of Sales, x-320
$99/night single, $109/night double are the Boston University rates.
A block of 25 rooms will be held for us until April 28, 1997.
This hotel is within a mile of the CNS Department. There is
a trolley stop directly outside the hotel that will take you
to within a block of the CNS Department.
For information about other Boston-area hotels, please see
http://www.boston.com/travel/lodging.htm.
From john@dcs.rhbnc.ac.uk Wed Apr 2 18:46:30 1997
From: John Shawe-Taylor
Message-Id: <199704020956.KAA31415@platon.cs.rhbnc.ac.uk>
To: john@dcs.rhbnc.ac.uk, vovk@dcs.rhbnc.ac.uk, alex@dcs.rhbnc.ac.uk,
pete@dcs.rhbnc.ac.uk, dave@dcs.rhbnc.ac.uk, jon@dcs.rhbnc.ac.uk,
m.anthony@lse.ac.uk, Paul.Vitanyi@cwi.nl,
Esko Ukkonen , orponen@igi.tu-graz.ac.at,
Michel.Cosnard@lip.ens-lyon.fr, maass@igi.tu-graz.ac.at,
cesabian@dsi.unimi.it, mauri , vigna@dsi.unimi.it,
Felipe Cucker , Veronique.Bruyere@umh.ac.be,
Christian.Michaux@umh.ac.be, Maurice.Boffa@umh.ac.be,
meer@rwth-aachen.de, gavalda@lsi.upc.es, balqui@lsi.upc.es,
torras@ic.upc.es, bruf@igi.tu-graz.ac.at, jpd@pip.fpms.ac.be,
panizza@cse.ucsc.edu, ferretti@dsi.unimi.it, g.r.brightwell@lse.ac.uk,
hpaugam@lip.ens-lyon.fr, mschmitt@igi.tu-graz.ac.at,
boldi@dsi.unimi.it, Bernard.Girau@lip.ens-lyon.fr,
Tapio.Elomaa@cs.Helsinki.FI, jkivinen@varisluoto.cs.Helsinki.FI,
Petri.Myllymaki@cs.Helsinki.FI, koiran@lip.ens-lyon.fr,
pauer@igi.tu-graz.ac.at, Didier.Puzenat@lip.ens-lyon.fr,
Richard.Baron@lip.ens-lyon.fr, castro@lsi.upc.es, buhrman@cwi.nl,
david@goliat.upc.es, vlavin@lsi.upc.es, carlos@goliat.upc.es,
pdg@cwi.nl, N.L.Biggs@lse.ac.uk, Herman.Ehrenburg@cwi.nl,
gegout@clipper.ens.fr, Olivier.Bournez@lip.ens-lyon.fr,
Jean-Sylvestre.Gakwaya@umh.ac.be,
simon@nereus.informatik.uni-dortmund.de, shai@csa.CS.Technion.AC.IL,
bartlett@deakin.anu.edu.au, williams@faceng.anu.edu.au, lugosi@upf.es,
colt@cs.uiuc.edu, Connectionists@cs.cmu.edu, neur-sci@dl.ac.uk,
comp-neuro@smaug.bbb.caltech.edu,
neuron-request@cattell.psych.upenn.edu
Subject: Technical Report Series in Neural and Computational Learning
Date: Wed, 02 Apr 97 10:56:14 +0100
The European Community ESPRIT Working Group in Neural and Computational
Learning Theory (NeuroCOLT) has produced a set of new Technical Reports
available from the remote ftp site described below. They cover topics in
real-valued complexity theory, computational learning theory, and analysis
of the computational power of continuous neural networks. An abstract is
included for each title.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-015:
----------------------------------------
Exact Learning of subclasses of CDNF formulas with membership queries
by Carlos Domingo, Universitat Polit\`ecnica de Catalunya, Spain
Abstract:
We consider the exact learnability of subclasses of Boolean formulas
from membership queries alone. We show how to combine known learning
algorithms that use membership and equivalence queries to obtain new
learning results that use membership queries only. In particular, we show the exact
learnability of read-$k$ monotone CDNF formulas, Sat-$k$ ${\cal O}(\log
n)$-CDNF, and ${\cal O}(\sqrt{\log n})\mbox{-size CDNF}$ from
membership queries only.
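To illustrate the flavor of membership-query learning for monotone formulas, here is a standard subroutine: walking a known positive assignment down to a minimal true point. For a monotone DNF, each minimal true point picks out exactly the variables of one term. The target formula below is a hypothetical example; the report's algorithms for read-$k$ and Sat-$k$ CDNF are considerably more involved.

```python
def minimize_positive(x, member):
    """Greedily flip 1-bits to 0 while the membership oracle still
    answers True; the result is a minimal true point of a monotone
    function."""
    x = list(x)
    for i in range(len(x)):
        if x[i] == 1:
            x[i] = 0
            if not member(x):
                x[i] = 1  # this bit is necessary; restore it
    return x

# Hypothetical monotone target: (x0 AND x2) OR (x1 AND x3).
def member(x):
    return bool((x[0] and x[2]) or (x[1] and x[3]))

term = minimize_positive([1, 1, 1, 1], member)
print(term)  # [0, 1, 0, 1] -> corresponds to the term x1 AND x3
```

Note that the greedy order determines which term's minimal point is found; here flipping x0 first steers the walk toward the x1 AND x3 term.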
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-016:
----------------------------------------
Decision Trees have Approximate Fingerprints
by Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain
Vijay Raghavan, Vanderbilt University, USA
Abstract:
We prove that decision trees exhibit the ``approximate fingerprint''
property, and therefore are not polynomially learnable using only
equivalence queries. A slight modification of the proof extends this
result to several other representation classes of Boolean concepts
which have been studied in computational learning theory.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-017:
----------------------------------------
Learning Monotone Term Decision Lists
by David Guijarro, Victor Lavin, Universitat Polit\`ecnica de Catalunya, Spain
Vijay Raghavan, Vanderbilt University, USA
Abstract:
We study the learnability of monotone term decision lists in the exact
model of equivalence and membership queries. We show that, for any
constant $k \ge 0$, $k$-term monotone decision lists are exactly and
properly learnable with $n^{O(k)}$ membership queries in $O(n^{k^3})$
time. We also show that $n^{\Omega(k)}$ membership queries are necessary
for exact learning. In contrast, neither $k$-term monotone decision lists
($k \ge 2$) nor general monotone decision lists are learnable with
equivalence queries alone.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-018:
----------------------------------------
Learning nearly monotone $k$-term DNF
by Jorge Castro, David Guijarro, Victor Lavin,
Universitat Polit\`ecnica de Catalunya, Spain
Abstract:
This note studies the learnability of the class $k$-term DNF with a
bounded number of negations per term. We study the case of learning
with membership queries alone, and give tight upper and lower bounds on
the number of negations that makes the learning task feasible. We also
prove a negative result for equivalence queries. Finally, we show that
a slight modification of our algorithm proves that the considered class
is also learnable in the Simple PAC model, extending Li and Vit\'anyi's
result for monotone $k$-term DNF.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-019:
----------------------------------------
$\delta$-uniform BSS Machines
by Paolo Boldi, Sebastiano Vigna, Universit\`a degli Studi di Milano, Italy
Abstract:
A $\delta$-uniform BSS machine is almost like a standard BSS machine,
but the negativity test is replaced by a ``smaller than $-\delta$''
test, where the threshold $\delta\in(0,1)$ is not known: in this way we
represent the impossibility of performing exact equality tests. We
prove that, for any real closed Archimedean field $R$, the
$\delta$-uniform semi-decidable sets are exactly the interiors of BSS
semi-decidable sets. Then, we show that the sets semi-decidable by
Turing machines are the sets semi-decidable by $\delta$-uniform
machines with coefficients in $Q$ or $T$, the field of Turing
computable numbers.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-020:
----------------------------------------
The Computational Power of Spiking Neurons Depends on the Shape of the
Postsynaptic Potentials
by Wolfgang Maass, Berthold Ruf, Technische Universitaet Graz, Austria
Abstract:
The computational power of spiking neurons (also called ``integrate
and fire neurons'') has recently begun to be investigated. These are
neuron models that are substantially more realistic from the biological
point of view than the ones which are traditionally employed in
artificial neural nets. It has turned out that the computational power
of networks of spiking neurons is quite large. In particular they have
the ability to communicate and manipulate analog variables in
spatio-temporal coding, i.e.~encoded in the time points when specific
neurons ``fire'' (and thus send a ``spike'' to other neurons).
These results have motivated the question of which details of
the firing mechanism of spiking neurons are essential for their
computational power, and which details are ``accidental'' aspects of
their realization in biological ``wetware''. Obviously this question
becomes important if one wants to capture some of the advantages of
computing and learning with spatio-temporal coding in a new generation
of artificial neural nets, such as for example pulse stream VLSI.
The firing mechanism of spiking neurons is defined in terms of their
postsynaptic potentials or ``response functions'', which describe the
change in their electric membrane potential as a result of the firing
of another neuron. We consider in this article the case where the
response functions of spiking neurons are assumed to be of the
mathematically most elementary type: they are assumed to be
step-functions (i.e. piecewise constant functions). This happens to be
the functional form which has so far been adopted most frequently in
pulse stream VLSI as the form of potential changes (``pulses'') that
mimic the role of postsynaptic potentials in biological neural
systems. We prove the rather surprising result that in models without
noise the computational power of networks of spiking neurons with
arbitrary piecewise constant response functions is strictly weaker than
that of networks where the response functions of neurons also contain
short segments where they increase or decrease linearly (which is in
fact biologically more realistic). More precisely, we show, for
example, that addition of analog numbers is impossible
for a network of spiking neurons with piecewise constant response
functions (with any bounded number of computation steps, i.e. spikes),
whereas addition of analog numbers is easy if the response functions
have linearly increasing segments.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-021:
----------------------------------------
On the Effect of Analog Noise in Discrete-Time Analog Computations
by Wolfgang Maass, Technische Universitaet Graz, Austria
Pekka Orponen, University of Jyv\"askyl\"a, Finland
Abstract:
We introduce a model for analog noise in analog computation with
discrete time that is flexible enough to cover the most important
concrete cases, such as noisy analog neural nets and networks of
spiking neurons. We show that the presence of arbitrarily small
amounts of analog noise reduces the power of analog computational
models to that of finite automata, and we also prove a new type of
upper bound for the VC-dimension of computational models with analog
noise.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-022:
----------------------------------------
Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in
Temporal Coding
by Wolfgang Maass and Thomas Natschl\"ager,
Technische Universitaet Graz, Austria
Abstract:
A theoretical model for analog computation in networks of spiking
neurons with temporal coding is introduced and tested through
simulations in GENESIS. It turns out that the use of multiple synapses
yields very noise robust mechanisms for analog computations via the
timing of single spikes in networks of detailed compartmental neuron
models.
One arrives in this way at a method for emulating arbitrary Hopfield
nets with spiking neurons in temporal coding, yielding new models for
associative recall of spatio-temporal firing patterns. We also show
that it suffices to store these patterns in the efficacies of
\emph{excitatory} synapses.
A corresponding \emph{layered} architecture yields a refinement of the
synfire-chain model that can assume a fairly large set of different
stable firing patterns for different inputs.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-023:
----------------------------------------
The Perceptron algorithm vs. Winnow: linear vs. logarithmic mistake
bounds when few input variables are relevant
by Jyrki Kivinen, University of Helsinki, Finland
Manfred Warmuth, University of California, Santa Cruz, USA
Peter Auer, Technische Universitaet Graz, Austria
Abstract:
We give an adversary strategy that forces the Perceptron algorithm to
make $\Omega(k N)$ mistakes in learning monotone disjunctions over $N$
variables with at most $k$ literals. In contrast, Littlestone's
algorithm Winnow makes at most $O(k\log N)$ mistakes for the same
problem. Both algorithms use thresholded linear functions as their
hypotheses. However, Winnow does multiplicative updates to its weight
vector instead of the additive updates of the Perceptron algorithm.
The Perceptron algorithm is an example of {\em additive\/} algorithms,
which have the property that their weight vector is always a sum of a
fixed initial weight vector and some linear combination of already seen
instances. We show that an adversary can force any additive algorithm
to make $(N+k-1)/2$ mistakes in learning a monotone disjunction of at
most $k$ literals. Simple experiments show that for $k\ll N$, Winnow
clearly outperforms the Perceptron algorithm also on nonadversarial
random data.
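The additive-vs-multiplicative contrast the abstract describes can be sketched in a few lines. This is a hedged illustration, not code from the report: the target, thresholds, and update constants below are the textbook choices, chosen here for concreteness.

```python
import random

# Toy target: a monotone disjunction of K relevant variables out of N.
N, K = 50, 3

def target(x):
    return int(any(x[i] for i in range(K)))

def run(update, w_init, theta, rounds=2000, seed=0):
    """Mistake-driven learning with a thresholded linear hypothesis."""
    rng = random.Random(seed)
    w = [w_init] * N
    mistakes = 0
    for _ in range(rounds):
        x = [rng.randint(0, 1) for _ in range(N)]
        y = target(x)
        y_hat = int(sum(wi for wi, xi in zip(w, x) if xi) >= theta)
        if y_hat != y:
            mistakes += 1
            for i in range(N):
                if x[i]:                 # only active inputs are updated
                    w[i] = update(w[i], y)
    return mistakes

# Perceptron: additive updates (+1 on a false negative, -1 on a false positive).
perceptron_mistakes = run(lambda wi, y: wi + (1 if y else -1), 0.0, 0.5)

# Winnow: multiplicative updates (double on promotion, halve on demotion).
winnow_mistakes = run(lambda wi, y: wi * (2.0 if y else 0.5), 1.0, float(N))
```

On nonadversarial random data of this kind, the abstract reports that Winnow clearly outperforms the Perceptron algorithm when $k\ll N$.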
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-024:
----------------------------------------
Approximating Hyper-Rectangles: Learning and Pseudo-random Sets
by Peter Auer, Technische Universitaet Graz, Austria
Philip Long, National University of Singapore, Singapore
Aravind Srinivasan, National University of Singapore, Singapore
Abstract:
The PAC learning of rectangles has been studied because they have been
found experimentally to yield excellent hypotheses for several applied
learning problems. Also, pseudorandom sets for rectangles have been
actively studied recently because (i) they are a subproblem common to
the derandomization of depth-2 (DNF) circuits and derandomizing
Randomized Logspace, and (ii) they approximate the distribution of $n$
independent multivalued random variables. We present improved upper
bounds for a class of such problems of ``approximating''
high-dimensional rectangles that arise in PAC learning and
pseudorandomness.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-025:
----------------------------------------
On Learning from Multi-Instance Examples: Empirical Evaluation of a
Theoretical Approach
by Peter Auer, Technische Universitaet Graz, Austria
Abstract:
We describe a practical algorithm for learning axis-parallel
high-dimensional boxes from multi-instance examples. The first
solution to this practical learning problem arising in drug design was
given by Dietterich, Lathrop, and Lozano-Perez. A theoretical analysis
was performed by Auer, Long, Srinivasan, and Tan.
In this work we derive a competitive algorithm from theoretical
considerations which is completely different from the approach taken
by Dietterich et al. Our algorithm uses only simple statistics of the
training data for learning and avoids potentially hard computational
problems which Dietterich et al. solved by heuristics. In empirical
experiments our algorithm performs quite well, although it does not
reach the performance of the fine-tuned algorithm of Dietterich et
al. We conjecture that our approach can be
fruitfully applied also to other learning problems where certain
statistical assumptions are satisfied.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-026:
----------------------------------------
Computing Functions with Spiking Neurons in Temporal Coding
by Berthold Ruf, Technische Universitaet Graz, Austria
Abstract:
For fast neural computations within the brain it is very likely that
the timing of single firing events is relevant. Recently Maass has
shown that under certain weak assumptions functions can be computed in
temporal coding by leaky integrate-and-fire neurons. Here we
demonstrate with the help of computer simulations using GENESIS that
biologically more realistic neurons can compute linear functions in a
natural and straightforward way based on the basic principles of the
construction given by Maass. One only has to assume that a neuron
receives all its inputs in a time interval of approximately the length
of the rising segment of its excitatory postsynaptic potentials. We
also show that under certain assumptions there exists within this
construction some type of activation function being computed by such
neurons, which allows the fast computation of arbitrary continuous
bounded functions.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-027:
----------------------------------------
Hebbian Learning in Networks of Spiking Neurons Using Temporal Coding
by Berthold Ruf, Michael Schmitt, Technische Universitaet Graz, Austria
Abstract:
Computational tasks in biological systems that require short response
times can be implemented in a straightforward way by networks of
spiking neurons that encode analogue values in temporal coding. We
investigate the question of how spiking neurons can learn on the basis of
differences between firing times. In particular, we provide learning
rules of the Hebbian type in terms of single spiking events of the pre-
and postsynaptic neuron and show that the weights approach some value
given by the difference between pre- and postsynaptic firing times with
arbitrarily high precision. Our learning rules give rise to a
straightforward possibility for realizing very fast pattern analysis
tasks with spiking neurons.
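A rough illustration of convergence of the kind the abstract describes: a weight nudged toward the difference between pre- and postsynaptic firing times approaches that value geometrically. The constants and the exact functional form here are invented for illustration; the report's rules are stated in terms of single spiking events and may differ.

```python
# Firing times of the pre- and postsynaptic neuron (arbitrary units) for a
# repeatedly presented pattern -- toy values, not from the report.
t_pre, t_post = 3.0, 5.5
eta = 0.2          # learning rate (invented)

w = 0.0
for _ in range(200):
    # Nudge the weight toward the firing-time difference; the error shrinks
    # by a factor (1 - eta) per presentation, so w -> t_post - t_pre.
    w += eta * ((t_post - t_pre) - w)
```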
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-028:
----------------------------------------
Overview of Learning Systems produced by NeuroCOLT Partners
by NeuroCOLT Partners
Abstract:
This NeuroCOLT Technical Report documents a number of systems that
have been produced within the NeuroCOLT partnership. It includes only
a summary of each system, together with pointers to where the system
is located and where more information about its performance and
design can be found.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-029:
----------------------------------------
On Bayesian Case Matching
by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri,
University of Helsinki, Finland
Abstract:
In this paper we present a new probabilistic formalization of the
case-based reasoning paradigm. In contrast to earlier Bayesian
approaches, the new formalization does not need a transformation step
between the original case space and the distribution space. We
concentrate on applying this Bayesian framework to the case matching
problem, and propose a probabilistic scoring metric for this task. In
the experimental part of the paper, the Bayesian case matching score is
evaluated empirically by using publicly available real-world case
bases. The results show that when presented with cases where some of
the feature values have been removed, a relatively small number of
remaining values is sufficient for retrieving the original case from
the case base by using the proposed measure. The experiments also show
that the approach is computationally very efficient.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-030:
----------------------------------------
Batch Classifications with Discrete Finite Mixtures
by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri,
University of Helsinki, Finland
Abstract:
In this paper we study batch classification problems where multiple
predictions can be made simultaneously, instead of performing the
classifications independently one at a time. For the predictions we
use the model family of discrete finite mixtures, where, by introducing
a hidden latent variable, we implicitly assume missing data that has to
be estimated in order to be able to construct models from sample data.
The main contribution of this paper is to demonstrate how the standard
EM algorithm can be modified for estimating both the missing latent
variable data, and the batch classification data at the same time, thus
allowing us to use the same algorithm both for constructing the models
from training data and for making predictions. In our framework the
amount of data available for making predictions is greater than with
the traditional approach, as the algorithm can also exploit the
information available in the query vectors. In the empirical part of
the paper, the results obtained by the batch classification approach
are compared to those obtained by standard (independent) predictions by
using public domain classification data sets.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-031:
----------------------------------------
Bayes Optimal Lazy Learning
by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri,
University of Helsinki, Finland
Abstract:
In this paper we present a new probabilistic formalization of the lazy
learning approach. In our Bayesian framework, moving from the
construction of an explicit hypothesis to a lazy learning approach,
where predictions are made by combining the training data at query
time, is equivalent to integrating out all the model parameters. Hence
in Bayesian Lazy Learning the predictions are made by using all the
(infinitely many) models. We present the formalization of this general
framework, and illustrate its use in practice in the case of the Naive
Bayes classifier model family. The Bayesian lazy learning approach is
validated empirically with public domain data sets and the results are
compared to the performance of the traditional, single model Naive
Bayes. The general framework described in this paper can be applied
with any formal model family, and to any discrete prediction task where
the number of simultaneously predicted attributes is small, which
includes for example all classification tasks prevalent in the machine
learning literature.
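For the Naive Bayes family with Dirichlet priors, integrating out the parameters has a closed form: the posterior predictive distribution simply replaces maximum-likelihood frequencies with smoothed counts. A toy sketch of that idea; the data set and the symmetric prior `alpha = 1.0` are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

# Toy training set: (binary feature vector, class label) pairs -- invented.
data = [((1, 0), 'a'), ((1, 1), 'a'), ((1, 0), 'a'),
        ((0, 1), 'b'), ((0, 0), 'b')]
classes = ['a', 'b']
alpha = 1.0   # symmetric Dirichlet prior

class_count = defaultdict(float)
feat_count = defaultdict(float)   # (class, feature index, value) -> count
for x, c in data:
    class_count[c] += 1
    for j, v in enumerate(x):
        feat_count[(c, j, v)] += 1

def predict(x):
    """Posterior predictive class distribution: all (infinitely many)
    parameter settings are integrated out, which for Dirichlet priors
    reduces to 'add-alpha' smoothed counts."""
    scores = {}
    for c in classes:
        p = (class_count[c] + alpha) / (len(data) + alpha * len(classes))
        for j, v in enumerate(x):
            # binary features: 2 possible values per feature
            p *= (feat_count[(c, j, v)] + alpha) / (class_count[c] + 2 * alpha)
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

probs = predict((1, 0))   # query closest to the 'a' examples
```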
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-032:
----------------------------------------
On Predictive Distributions and Bayesian Networks
by Petri Kontkanen, Petri Myllym\"aki, Tom Silander and Henry Tirri,
University of Helsinki, Finland
Abstract:
In this paper we are interested in discrete prediction problems for a
decision-theoretic setting, where the task is to compute the predictive
distribution for a finite set of possible alternatives. This question
is first addressed in a general framework, where we consider a set of
probability distributions defined by some parametric model class. The
standard Bayesian approach is to compute the posterior probability for
the model parameters, given a prior distribution and sample data, and
fix the parameters to the instantiation with the {\em maximum a
posteriori} probability. A more accurate predictive distribution can
be obtained by computing the {\em evidence}, i.e., the integral over
all the individual parameter instantiations. As an alternative to these
two approaches, we demonstrate how to use Rissanen's new definition of
{\em stochastic complexity} for determining predictive distributions.
We then describe how these predictive inference methods can be realized
in the case of Bayesian networks. In particular, we demonstrate the
use of the Jeffreys prior as the prior distribution for computing the
evidence predictive distribution. It can be shown that the evidence
predictive distribution with the Jeffreys prior approaches the new
stochastic complexity predictive distribution in the limit with
increasing amount of sample data. For computational reasons in the
experimental part of the paper the three predictive distributions are
compared by using the tree-structured, simple Naive Bayes model. The
experimentation with several public domain classification datasets
suggests that the evidence approach produces the most accurate
predictions in the log-score sense, especially with small training
sets.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-033:
----------------------------------------
Partial Occam's Razor and its Applications
by Carlos Domingo, Tatsuie Tsukiji and Osamu Watanabe,
Tokyo Institute of Technology, Japan
Abstract:
We introduce the notion of ``partial Occam algorithm''. A partial
Occam algorithm produces a succinct hypothesis that is partially
consistent with given examples, where the proportion of consistent
examples is a bit more than half. By using this new notion, we propose
one approach for obtaining a PAC learning algorithm. First, as shown
in this paper, a partial Occam algorithm is equivalent to a weak PAC
learning algorithm. Then by using boosting techniques of Schapire or
Freund, we can obtain an ordinary PAC learning algorithm from this weak
PAC learning algorithm. We demonstrate with examples that this
approach can yield improvements, in particular in the
hypothesis size. First, we obtain a (non-proper) PAC learning
algorithm for $k$-DNF, which has similar sample complexity as
Littlestone's Winnow, but produces hypothesis of size polynomial in $d$
and $\log k$ for a $k$-DNF target with $n$ variables and $d$ terms
({\it Cf.}~ The hypothesis size of Winnow is $\CO(n^k)$). Next we show
that 1-decision lists of length $d$ with $n$ variables are (non-proper)
PAC learnable by using $\dsp{\CO\rpr{\frac{1}{\epsilon} \rpr{\log
\frac{1}{\delta}+16^d\log n(d+\log \log n)^2}}}$ examples within
polynomial time w.r.t.\ $n$, $2^d$, $1/\epsilon$, and $\log 1/\delta$.
Again, we obtain a sample complexity similar to Winnow for the same
problem but with a much smaller hypothesis size. We also show that our
algorithms are robust against random classification noise.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-034:
----------------------------------------
Algorithms for Learning Finite Automata from Queries: A Unified View
by Jos\'e Balc\'azar, Josep D\'iaz, Ricard Gavalda
Universitat Polit\`ecnica de Catalunya, Spain
Osamu Watanabe, Tokyo Institute of Technology, Japan
Abstract:
In this survey we compare several known variants of the algorithm for
learning deterministic finite automata via membership and equivalence
queries. We believe that our presentation makes it easier to understand
what is going on and what the differences between the various
algorithms mean. We also include a comparative analysis of the
algorithms, review some known lower bounds, prove a new one, and
discuss the question of parallelizing such algorithms.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-035:
----------------------------------------
Using Fewer Examples to Simulate Equivalence Queries
by Ricard Gavalda, Universitat Polit\`ecnica de Catalunya, Spain
Abstract:
It is well known that an algorithm that learns exactly using
Equivalence queries can be transformed into a PAC algorithm that asks
for random labelled examples. The first transformation, due to Angluin
(1988), uses a number of examples quadratic in the number of queries.
Later, Littlestone (1989) and Schuurmans and Greiner (1995) gave
transformations using linearly many examples. We present here another
analysis of Littlestone's transformation which is both simpler and
gives better leading constants. Our constants are still worse than
Schuurmans and Greiner's, but while ours is a worst-case bound on the
number of examples to achieve PAC learning, theirs is only an expected
one.
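The transformation in question replaces each equivalence query by a batch of random labelled examples: if any example disagrees with the current hypothesis it is returned as a counterexample, otherwise the hypothesis is accepted as probably approximately correct. A minimal sketch with an invented toy target (a parity function), not the constructions analysed in the report:

```python
import random

rng = random.Random(1)
N = 8
# Invented toy target: parity over a fixed subset of the N bits.
MASK = [1, 0, 1, 0, 1, 0, 1, 0]

def draw_example():
    """Draw a random labelled example (x, f(x)) from the toy target."""
    x = [rng.randint(0, 1) for _ in range(N)]
    label = sum(a & b for a, b in zip(x, MASK)) % 2
    return x, label

def simulate_equivalence_query(hypothesis, m):
    """Replace an equivalence query by m random labelled examples: return a
    counterexample if one disagrees with the hypothesis, else None
    (interpreted as 'yes, equivalent', correct with high probability)."""
    for _ in range(m):
        x, y = draw_example()
        if hypothesis(x) != y:
            return (x, y)
    return None

# The exact target is never refuted ...
exact = lambda x: sum(a & b for a, b in zip(x, MASK)) % 2
ok = simulate_equivalence_query(exact, 100)
# ... while a trivial hypothesis is (with overwhelming probability) caught.
cex = simulate_equivalence_query(lambda x: 0, 100)
```

The number of examples spent per query is what the transformations discussed in the abstract trade off.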
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-036:
----------------------------------------
A Dichotomy Theorem for Learning Quantified Boolean Formulas
by Victor Dalmau, Universitat Polit\`ecnica de Catalunya, Spain
Abstract:
We consider the following classes of quantified boolean formulas. Fix a
finite set of basic boolean functions. Take conjunctions of these basic
functions applied to variables and constants in an arbitrary way.
Finally, quantify some of the variables existentially or universally.
We prove
the following {\em dichotomy theorem}: For any set of basic boolean
functions, the resulting set of formulas is either polynomially
learnable from equivalence queries alone or else it is not
PAC-predictable even with membership queries under cryptographic
assumptions. Furthermore we identify precisely which sets of basic
functions are in which of the two cases.
----------------------------------------
NeuroCOLT Technical Report NC-TR-97-037:
----------------------------------------
Discontinuities in Recurrent Neural Networks
by Ricard Gavald\`a, Universitat Polit\`ecnica de Catalunya, Spain
Hava Siegelmann, Technion, Israel
Abstract:
This paper studies the computational power of various discontinuous
real computational models that are based on the classical analog
recurrent neural network (ARNN). This ARNN consists of a finite number
of neurons; each neuron computes a polynomial net-function and a
sigmoid-like continuous activation-function.
The authors introduce ``arithmetic networks'' as ARNN augmented with a
few simple discontinuous (e.g., threshold) neurons. They argue that
even with weights restricted to polynomial time computable reals,
arithmetic networks are able to compute arbitrarily complex recursive
functions. A proof is provided to show that arithmetic networks are
computationally equivalent to networks comprised of neurons that
compute division and polynomial net-functions inside sigmoid-like
continuous activation functions. Further, the authors prove that these
arithmetic networks are equivalent to the Blum-Shub-Smale (BSS) model,
when the latter is restricted to a bounded number of registers.
With regards to implementation on digital computers, the authors
demonstrate that arithmetic networks with rational weights require
exponential precision; but even with very simple real weights
arithmetic networks are not subject to precision bounds. As such, they
cannot be approximated on digital machines. This is in contrast with
the ARNN, which is known to demand only precision that is linear in
the computation time.
When complex periodic discontinuous neurons (e.g., sine, tangent,
fractional parts) are added to arithmetic networks, the resulting
networks are computationally equivalent to a massively parallel
machine. Thus, this highly discontinuous network can solve the
presumably intractable class of PSPACE-complete problems in polynomial
time.
--------------------------------------------------------------------
***************** ACCESS INSTRUCTIONS ******************
The Report NC-TR-97-001 can be accessed and printed as follows
% ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-97-001.ps.Z
ftp> bye
% zcat nc-tr-97-001.ps.Z | lpr -l
Similarly for the other technical reports.
Uncompressed versions of the postscript files have also been
left for anyone not having an uncompress facility.
In some cases there are two files available, for example,
nc-tr-97-002-title.ps.Z
nc-tr-97-002-body.ps.Z
The first contains the title page while the second contains the body
of the report. The single command,
ftp> mget nc-tr-97-002*
will prompt you for the files you require.
A full list of the currently available Technical Reports in the
Series is held in a file `abstracts' in the same directory.
The files may also be accessed via WWW starting from the NeuroCOLT
homepage:
http://www.dcs.rhbnc.ac.uk/research/compint/neurocolt
or directly to the archive:
ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports
Best wishes
John Shawe-Taylor
From krogh@frame.cbs.dtu.dk Thu Apr 3 14:59:58 1997
From: Anders Krogh
Message-Id: <9704030924.ZM24091@frame.cbs.dtu.dk>
Date: Thu, 3 Apr 1997 09:24:46 -0600
X-Mailer: Z-Mail (3.1.0 22feb94 MediaMail)
To: Connectionists@cs.cmu.edu
Subject: Papers on ensemble learning
Cc: pkso@holyrood.ed.ac.uk
Content-Type: text/plain; charset=us-ascii
Mime-Version: 1.0
Dear connectionists,
the following paper is now available from our web page
(http://www.ph.ed.ac.uk/~pkso/papers/EnsemblePREVII.ps.gz) :
STATISTICAL MECHANICS OF ENSEMBLE LEARNING
Anders Krogh and Peter Sollich
(Physical Review E, 55:811-825, 1997)
Abstract
Within the context of learning a rule from examples, we study the
general characteristics of learning with ensembles. The generalization
performance achieved by a simple model ensemble of linear students is
calculated exactly in the thermodynamic limit of a large number of input
components and shows a surprisingly rich behavior. Our main findings
are the following. For learning in large ensembles, it is advantageous
to use underregularized students, which actually overfit the training
data. Globally optimal generalization performance can be obtained by
choosing the training set sizes of the students optimally. For smaller
ensembles, optimization of the ensemble weights can yield significant
improvements in ensemble generalization performance, in particular if
the individual students are subject to noise in the training process.
Choosing students with a wide range of regularization parameters makes
this improvement robust against changes in the unknown level of
corruption of the training data.
An abbreviated version of this paper appeared in NIPS 8 and is also
available (http://www.ph.ed.ac.uk/~pkso/papers/EnsembleNIPSVI.ps.gz).
Both papers are also available from http://www.cbs.dtu.dk/krogh/refs.html.
Further papers of potential interest to readers of connectionists can be
found on our home pages:
* Peter Sollich (http://www.ph.ed.ac.uk/~pkso/publications): learning
from queries, online learning, finite size effects in neural networks,
ensemble learning
* Anders Krogh (http://www.cbs.dtu.dk/krogh/): Most current work on
using hidden Markov models in `computational biology.'
All comments and suggestions are welcome -
Anders Krogh and Peter Sollich
--------------------------------------------------------------------------
Peter Sollich
Department of Physics, University of Edinburgh
Kings Buildings, Mayfield Road, Edinburgh EH9 3JZ, U.K.
e-mail: P.Sollich@ed.ac.uk
phone: +44 - (0)131 - 650 5293
fax: +44 - (0)131 - 650 5212
--------------------------------------------------------------------------
Anders Krogh
Center for Biological Sequence Analysis (CBS)
Technical University of Denmark
Building 206
DK-2800 Lyngby
DENMARK
Phone: +45 4525 2470
Fax: +45 4593 4808
E-mail: krogh@cbs.dtu.dk
_____________________________________________
From nburgess@lbs.ac.uk Thu Apr 3 14:59:59 1997
From: Neil Burgess
Organization: London Business School
To: Connectionists@cs.cmu.edu
Date: Thu, 3 Apr 1997 09:16:34 BST
Subject: Pre-prints available - Neural Networks in the Capital Markets
Reply-to: boguntula@lbs.ac.uk
Priority: normal
X-mailer: Pegasus Mail for Windows (v2.52)
Message-ID: <9B468B447F1@neptune.lbs.ac.uk>
Neural Networks in the Capital Markets:
The following NNCM-96 pre-prints are now available on request.
Please send your postal address to: boguntula@lbs.ac.uk
=====================================================
ASSET ALLOCATION ACROSS EUROPEAN EQUITY INDICES USING
A PORTFOLIO OF DYNAMIC COINTEGRATION MODELS
A. N. BURGESS
Department of Decision Science
London Business School
Regents Park, London, NW1 4SA, UK
In modelling financial time-series, the model selection process is
complicated by the presence of noise and possible structural
non-stationarity. Additionally, the near-efficiency of financial
markets combined with the flexibility of advanced modelling techniques
creates a significant risk of "data-snooping". These factors combine
to make trading a single model a very risky proposition, particularly
in a situation which allows for high leverage, such as futures
trading. We believe that the risks inherent in relying on a given
model can be reduced by combining a whole set of models and, to this
end, describe a population-based methodology which involves building a
portfolio of complementary models. We describe an application of the
technique to the problem of modelling a set of European equity indices
using a portfolio of cointegration-based models.
=====================================================
FORECASTING VOLATILITY MISPRICING
P. J. BOLLAND & A. N. BURGESS
Department of Decision Science
London Business School
Regents Park, London, NW1 4SA, UK
A simple strategy is employed to exploit volatility mispricing based
on discrepancies between implied and actual market volatility. The
strategy uses forward and Log contracts to either buy or sell
volatility depending on whether volatility is over or under priced.
As expected, buying volatility gives small profits on average but with
occasional large losses in adverse market conditions. In this paper
multivariate non-linear methods are used to forecast the returns of a
Log contract portfolio. The explanatory power of implied volatility
and the volatility term structure from several indices (FTSE, CAC,
DAX) are investigated. Neural network methodologies are benchmarked
against linear regression. The use of both multivariate data and
non-linear techniques are shown to significantly improve the accuracy
of predictions.
Keywords: Options, Volatility Mispricing, Log contract, Volatility Term
Structure
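As a rough illustration of why non-linear techniques can outperform linear regression on this kind of problem, the sketch below fits both to synthetic data with a non-linear input-output relation. A cubic basis stands in for the paper's neural networks; all data and names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: a single feature related non-linearly to a
# target return.  The data-generating process is invented for illustration.
n = 500
x = rng.uniform(-2, 2, n)
y = np.tanh(2 * x) + rng.normal(0, 0.1, n)   # non-linear ground truth

x_tr, y_tr = x[:400], y[:400]
x_te, y_te = x[400:], y[400:]

def fit_ls(design_tr, design_te):
    """Least-squares fit on the training design, squared error on test."""
    w, *_ = np.linalg.lstsq(design_tr, y_tr, rcond=None)
    return np.mean((design_te @ w - y_te) ** 2)

ones_tr, ones_te = np.ones_like(x_tr), np.ones_like(x_te)
mse_lin = fit_ls(np.column_stack([ones_tr, x_tr]),
                 np.column_stack([ones_te, x_te]))
# Cubic basis as a minimal non-linear model (a neural network would play
# this role in the paper).
mse_nl = fit_ls(np.column_stack([ones_tr, x_tr, x_tr**2, x_tr**3]),
                np.column_stack([ones_te, x_te, x_te**2, x_te**3]))
print(f"test MSE: linear {mse_lin:.3f}, non-linear {mse_nl:.3f}")
```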
=====================================================
From w.penny@ic.ac.uk Thu Apr 3 23:10:29 1997
From: w.penny@ic.ac.uk
Date: Thu, 3 Apr 1997 15:22:20 +0100
Message-Id: <22983.199704031422@albert.ee.ic.ac.uk>
To: Connectionists@cs.cmu.edu
Subject: Research jobs in neural nets and pattern recognition
Cc: w.penny@ic.ac.uk, s.j.roberts@ic.ac.uk, rickman@rrl.co.uk
THREE POST-DOCTORAL RESEARCH POSITIONS
IN PATTERN RECOGNITION / NEURAL NETWORKS RESEARCH
Three post-doctoral research positions are available within the Neural
Systems Section of the Department of Electrical & Electronic Engineering to
work on the theory and application of advanced pattern recognition
techniques, in particular the use of Bayesian methods and neural networks.
Two positions are funded for two years and the third nominally for three
years with a yearly evaluation. All projects involve research in
statistical pattern recognition with applications in the biomedical field.
Experience in pattern recognition and Bayesian statistics would be an
advantage. A good understanding of data processing (especially signal
processing) techniques is desired as is experience of UNIX, C and Matlab.
The positions are funded by the Jefferiss Research Trust, the European
Commission and British Aerospace plc respectively. The salary scale will be
RA1A, GBP 14,732 - 22,143 per annum (exclusive of London Allowance of GBP
2,134) depending on age and experience. Further information may be
obtained from http://www.ee.ic.ac.uk/research/neural/positions.html or via
e-mail to Dr Stephen Roberts (s.j.roberts@ic.ac.uk). The closing date for
applications is April 11th 1997.
In recent years, great interest has developed in the use of non-classical
methods for the statistical analysis of data, as part of a general shift
towards artificial intelligence methods. One family of methods which has
shown itself to be particularly suitable is that of connectionist models, a
subset of which are referred to as artificial neural networks (ANNs).
Classical statistical methods rely upon the use of simple models, such as
linear or logistic regression, in order to 'learn' relationships between
variables and outcomes. ANNs offer a far more flexible model set: it has
been shown that they have the property of universal approximation, so they
are able, in principle, to estimate any set of arbitrary relationships
between variables. Furthermore, they may model non-linear coupling between
sets of variables. Part of the momentum of the recent development of ANNs
for pattern recognition, regression and estimation problems must be
attributed to the manner in which ANNs conform to many of the traditional
statistical approaches, i.e. they may estimate Bayesian probabilities in
the case of classification and conditional averages in the case of
regression.
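The last point can be made concrete with the simplest possible "network", a single logistic unit: trained by gradient descent on two Gaussian classes, its output converges to the true Bayesian posterior probability, which is known in closed form for this synthetic problem. A sketch, not any of the projects' actual models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two 1-D Gaussian classes (unit variance, means -1 and +1, equal priors):
# the true posterior P(class=1 | x) is exactly sigmoid(2x), so even a
# single logistic unit can estimate it.
n = 4000
labels = rng.integers(0, 2, n)
x = rng.normal(2.0 * labels - 1.0, 1.0)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(300):                          # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))    # predicted P(class=1 | x)
    grad_w = np.mean((p - labels) * x)        # cross-entropy gradients
    grad_b = np.mean(p - labels)
    w -= lr * grad_w
    b -= lr * grad_b

# Compare the learned output with the known posterior on a grid.
xs = np.linspace(-2, 2, 9)
true_post = 1.0 / (1.0 + np.exp(-2.0 * xs))
est_post = 1.0 / (1.0 + np.exp(-(w * xs + b)))
print("max deviation from true posterior:", np.max(np.abs(true_post - est_post)))
```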
1) The use of Neural Networks to Predict the Development and Progression of
Kaposi's Sarcoma (KS).
This is a joint project funded by the Jefferiss Research Trust between the
Department of Electrical and Electronic Engineering, Imperial College of
Science, Technology & Medicine and the Department of Genito-urinary
Medicine, St. Mary's Hospital.
Kaposi's sarcoma (KS) is a vascular tumour, which is more common and often
aggressive in patients with underlying immunosuppression (post-transplant
KS and AIDS-associated KS). KS was first described by the Hungarian
pathologist Moritz Kaposi in 1872, yet still remains something of a
clinical enigma, being an unusual tumour of unknown origin.
The aim of this research is to determine factors that influence the
variable progression rate of KS in HIV infected individuals. There is
currently no means of predicting which patients will develop KS and no
understanding of the relationship between the forms of the disease. The aim
of the project is to carry out multi-variable analyses in order to define
clinical end-points and provide guidelines for better patient management.
A number of variables will be available to the system. The reliability and
utility of each with regard to the prediction of patient outcome, however,
is generally unknown. Classical regression analysis offers some powerful
methods of selection and ranking within a subset of features or variables.
Whilst such methods should be used for completeness and comparison, it is
noted that recent developments in Bayesian learning theory have offered the
possibility to assess the utility of variables from within the ANN
structure. Each input variable has a separate weighting factor, or in
Bayesian terminology, a hyper-prior, associated with it. This technique
has become known as automatic relevance determination or ARD. Such an
assessment is devoid of the strong assumptions of independence and
linearity of most of the classical regression methods.
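A minimal sketch of the ARD idea, using MacKay-style evidence re-estimation for a linear model (in the project an ANN would take the linear model's place; the data and constants below are invented): the precision hyperparameter associated with each irrelevant input grows large, effectively pruning it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic regression problem: only the first of three inputs is relevant.
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, n)    # inputs 1 and 2 are irrelevant

beta = 100.0                                  # noise precision (assumed known)
alpha = np.ones(3)                            # one prior precision per input
for _ in range(20):
    A = np.diag(alpha) + beta * X.T @ X       # posterior precision
    S = np.linalg.inv(A)                      # posterior covariance
    m = beta * S @ X.T @ y                    # posterior mean of the weights
    gamma = 1.0 - alpha * np.diag(S)          # "well-determinedness" of each weight
    alpha = gamma / (m ** 2 + 1e-12)          # ARD re-estimation step
print("posterior mean weights:", np.round(m, 2))
print("relevance precisions  :", alpha)
```

A small final precision marks a relevant input; a huge one marks an input the data do not support.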
It is feasible for an ANN to produce not only a set of output variables
(predictions or classifications, for example) but also an associated set of
confidence or validation measures (describing the probable error on each
output). This enables the tracking of predictions of future events in a
more robust framework and furthermore allows for the accurate fusion of
information from more than one source and the incorporation of temporal
information, i.e. poor quality information from the present time may be
suppressed in favour of more reliable information from past or future as it
becomes available.
If we may regard the system as aiming to produce a probability distribution
in some 'outcome space', then several possible approaches to analysis are
made available. As temporal information is retained (i.e. outcomes are
based upon the entire course of the patient's history, not just present
information) we may seek information regarding the effect of each piece of
information (test result or partial diagnosis) on the probability
distribution in the 'outcome space'. Two pieces of information may be
obtained from this approach: how important a partial decision or test
result is to the probability of certain outcomes, and how important it is
to changing the uncertainty we have in the outcome results. Clearly, the goal
will be to indicate tests and/or procedures which not only increase the
survivability probabilities but also make the estimated outcomes less
variant, so we have more confidence in the predictions (this means not only
increasing the height of a favourable node in the posterior probability
space, but also attempting to reduce the variance of the distribution). In
order to accommodate multiple output hypotheses we propose to utilise a
procedure similar to that detailed in (Bishop 1995) whereby the output
distribution is modelled multi-modally. This has the added benefit that
individual modes (possible outcomes) may be tracked separately. This
representation is also similar to that taken in a mixture of experts
approach.
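A forward-pass sketch of such a multi-modal output layer, after Bishop (1995): raw network activations are mapped to the mixing fractions, means and widths of a Gaussian mixture over the outcome variable. The activations below are made up; only the parameterization is the point.

```python
import numpy as np

def mixture_from_activations(z_pi, z_mu, z_sigma):
    """Map unconstrained network activations to valid mixture parameters."""
    pi = np.exp(z_pi - z_pi.max())
    pi = pi / pi.sum()                        # softmax -> mixing fractions
    mu = z_mu                                 # unconstrained means
    sigma = np.exp(z_sigma)                   # positive widths
    return pi, mu, sigma

def mixture_pdf(t, pi, mu, sigma):
    """Density of the multi-modal predictive distribution at points t."""
    t = np.asarray(t)[:, None]
    comps = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (comps * pi).sum(axis=1)

# Two modes: e.g. hypothetical "stable" and "progressive" outcomes.
pi, mu, sigma = mixture_from_activations(
    np.array([0.2, -0.1]), np.array([-1.0, 2.0]), np.array([-0.5, 0.0]))
t = np.linspace(-6, 8, 1401)
pdf = mixture_pdf(t, pi, mu, sigma)
```

Each mode can then be tracked separately, as the text suggests, and the mixture's spread gives the uncertainty in the prediction.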
REFERENCES
1. Bishop CM. Neural Networks for Pattern Recognition. Oxford University
Press, Oxford, 1995.
2. Ripley BD. Pattern Recognition and Neural Networks. Cambridge University
Press, Cambridge, 1996.
3. Roberts SJ and Penny W. Novelty, Confidence and Errors in Connectionist
Systems. Proceedings of IEE colloquium on fault detection and intelligent
sensors, IEE, September 1996.
4. Penny W and Roberts SJ. Neural Networks with Error Bars. Departmental
report, also submitted to IEEE transactions on neural networks, February
1997, available from http://www.ee.ic.ac.uk/staff/hp/sroberts.html
2) SIESTA (EU funded project)
SIESTA is an EU-funded project which involves Imperial College and 10
other European partners. The aim of the project is to define and produce a
system which is capable of continuous evaluation of the state of the brain
during the sleep-wake cycle. Such an automated system is of enormous value
in the clinical field and the research into multi-channel signal
processing, fusion and pattern recognition form a challenge to the most
modern techniques. The state of the brain will, primarily, be monitored
via its electrical activity (the EEG).
One of the best-known approaches in the literature for achieving a
continuous description of EEG state is the system developed by Roberts &
Tarassenko (1992a, 1992b). This approach will be used as a general basis
for the research in SIESTA. Roberts & Tarassenko (henceforth R&T) used
a self-organizing feature map (SOM) to perform unsupervised topographic
mapping of feature vectors consisting of 10 coefficients of a Kalman filter
algorithm applied to the raw EEG. This self-organizing network discovered
eight distinct clusters in which the brain state remained preferentially.
From the general trajectories between these halting states, R&T discovered
three distinct patterns which roughly corresponded to the global brain
states W (wakefulness), S (slow wave or deep sleep) and R (REM sleep). The
outputs from the self-organizing feature map were subsequently mapped via a
Radial Basis Function (RBF) classifier onto three outputs, trained by
sections of data on which experts agreed upon the stage W, S, or R. For
each input, the resulting network produced probabilities for the three
global stages, describing intermediate stages as a combination of three
'mixing fractions'.
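The unsupervised half of that pipeline can be sketched as a minimal SOM. In the sketch below, synthetic clustered vectors stand in for the Kalman-filter coefficients, all constants are invented, and the only property checked is that the codebook comes to fit the data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic feature vectors drawn around three "brain state" centres.
n_feat, side = 4, 6                           # feature dim, 6x6 map
centres = rng.normal(0, 3, size=(3, n_feat))
data = np.vstack([c + rng.normal(0, 0.3, size=(200, n_feat)) for c in centres])
rng.shuffle(data)

W = rng.normal(0, 0.1, size=(side * side, n_feat))   # codebook vectors
grid = np.array([(i, j) for i in range(side) for j in range(side)], float)

def qerror(W):
    """Mean distance from each datum to its best-matching unit."""
    d = ((data[:, None, :] - W[None]) ** 2).sum(-1)
    return np.mean(np.sqrt(d.min(1)))

q0 = qerror(W)
T = 4000
for t in range(T):
    x = data[t % len(data)]
    bmu = np.argmin(((W - x) ** 2).sum(1))    # best-matching unit
    lr = 0.5 * (1 - t / T)                    # decaying learning rate
    rad = 3.0 * (1 - t / T) + 0.5             # shrinking neighbourhood
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * rad ** 2))
    W += lr * h[:, None] * (x - W)            # move neighbourhood toward x
print(f"quantization error: {q0:.2f} -> {qerror(W):.2f}")
```

In the R&T system the trained map's outputs then feed an RBF classifier trained on expert-agreed epochs.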
This general approach, yielding a novel way of describing brain state while
exploiting some of experts' knowledge in a partly supervised method, will
be adopted and extended in the following ways:
o Many features extracted from the signals will be considered.
o Instead of the 2-dimensional feature map used by R&T, alternative
approaches will be investigated. It has been shown that combinations of
other clustering and mapping methods can outperform SOMs. Moreover, since
topographic mapping is only exploited for visualization, the general
approach can be based on more advanced clustering techniques (e.g.
techniques for non-Gaussian clustering (Roberts 1997) or Bayesian-inspired
methods).
o In order to cope with the large number of input features to be
investigated, active feature selection methods will be applied.
o Techniques for intelligent sensor fusion will be investigated. When
multiple sources are combined to lead to classification results, it is not
trivial to decide which are the most relevant sources at any given time, or
what should happen when sources fail to provide input (e.g. because an
electrode is faulty). Approaches based on the computation of running error
measures can be employed here.
The Imperial College group will be a leading centre in the theory
subgroup of the project. We will be active in researching:
o Mixture density networks and mixtures of experts
o Model estimation and pre-processing
o Active sensor fusion
o Active feature and data selection
o Unsupervised data partitioning methods (clustering)
o Model comparison and validation
References
1. S.J. Roberts, Parametric and Non-parametric Unsupervised Cluster
Analysis. Pattern Recognition, 30 (2) ,1997.
2. S.J. Roberts and L. Tarassenko, The Analysis of the Sleep EEG using a
Multi-layer Neural Network with Spatial Organisation. IEE Proceedings Part
F, 139(6), 420-425, 1992a.
3. S.J. Roberts and L. Tarassenko, New Method of Automated Sleep
Quantification. Medical and Biological Engineering and Computing, 30(5),
509-517, 1992b.
3) Assessment of cortical vigilance
This project, funded by British Aerospace's Sowerby Research Centre at
Bristol, aims to assess and predict lapses in vigilance in the human brain.
The brain's electrical activity will be recorded and analysed.
The utility of a device or system which may monitor an individual's level
of vigilance is clear in a range of safety-critical environments. We
propose that such utility would be enhanced by a system which, as well as
monitoring the present state of vigilance, made a prediction as to the
likely evolution of vigilance in the near future. To perform both these
tasks, i.e. a static pattern assessment and a dynamic tracking and
prediction, sophisticated methods of information extraction, sensor fusion
and classification/regression must be employed. Over the last decade the
theory of 'artificial neural' networks has been placed within the
framework of advanced statistical decision theory, and it is within this
framework that we intend to work.
The aim of the project is to work towards a practical real-time system.
The latter should be minimally intrusive and should make predictions of
future vigilance states. Where appropriate, therefore, the investigation
will assess each technique in the developing system with a view to its
implementation in a real-time environment.
The project will involve research into:
o New methods of signal complexity and synchronisation estimation
o Information flow estimation in multi-channel environments
o Active sensor fusion
o Prediction and classification
o Error estimation
o State transition detection and state sequence modelling
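As one hypothetical instance of the first item, spectral entropy is a common signal-complexity measure: rhythmic, synchronized activity concentrates power in a few frequency bins (low entropy), while desynchronized activity spreads it out (high entropy). The signals below are purely illustrative, not real EEG:

```python
import numpy as np

rng = np.random.default_rng(5)

def spectral_entropy(sig):
    """Entropy of the normalized power spectrum, scaled to [0, 1]."""
    p = np.abs(np.fft.rfft(sig)) ** 2
    p = p / p.sum()                            # normalize to a distribution
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(sig.size // 2 + 1)

t = np.arange(0, 4, 1 / 128.0)                 # 4 seconds at 128 Hz
rhythmic = np.sin(2 * np.pi * 10 * t)          # alpha-like 10 Hz oscillation
irregular = rng.normal(size=t.size)            # broadband activity
print(f"rhythmic  : {spectral_entropy(rhythmic):.2f}")
print(f"irregular : {spectral_entropy(irregular):.2f}")
```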
Reference
Makeig, S. and Jung, T-P. and Sejnowski, T. (1996), Using feedforward
neural networks to monitor alertness from changes in EEG correlation and
coherence, Advances in Neural Information Processing Systems (NIPS), MIT
Press, Cambridge, MA.
From biehl@physik.uni-wuerzburg.de Fri Apr 4 12:14:24 1997
From: Michael Biehl
Message-Id: <199704040831.KAA29194@wptx08.physik.uni-wuerzburg.de>
Subject: preprint on on-line unsupervised learning
To: Connectionists
Date: Fri, 4 Apr 1997 10:31:35 +0200 (MESZ)
FTP-host: ftp.physik.uni-wuerzburg.de
FTP-filename: /pub/preprint/1997/WUE-ITP-97-003.ps.gz
The following manuscript is now available via anonymous ftp
(see below for the retrieval procedure), or, alternatively from
http://www.physik.uni-wuerzburg.de/~biehl
------------------------------------------------------------------
"Specialization processes in on-line unsupervised learning"
Michael Biehl, Ansgar Freking, Georg Reents, and Enno Schlösser
Contribution to the
Minerva Workshop on Mesoscopics, Fractals and Neural Networks
Eilat, Israel, March 1997
Ref: WUE-ITP-97-003
Abstract
From the recent analysis of supervised learning by on-line gradient
descent in multilayered neural networks it is known that the necessary
process of student specialization can be delayed significantly. We
demonstrate that this phenomenon also occurs in various models of
unsupervised learning. A solvable model of competitive learning is
presented, which identifies prototype vectors suitable for the
representation of high-dimensional data. The specific case of two
overlapping clusters of data and a matching number of prototype vectors
exhibits non-trivial behavior like almost stationary plateau
configurations. As a
second example scenario we investigate the application of Sanger's
algorithm for principal component analysis in the presence of two relevant
directions in input space. Here, the fast learning of the first principal
component may lead to an almost complete loss of initial knowledge about
the second one.
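Sanger's algorithm itself fits in a few lines. The sketch below (invented covariance, constant learning rate) runs the on-line rule on data with two relevant directions of unequal variance and checks that the first output unit aligns with the leading principal component:

```python
import numpy as np

rng = np.random.default_rng(6)

# Data with two relevant directions: principal axes rotated by theta,
# eigenvalues 3 and 0.5 (values chosen only for illustration).
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
C_half = R @ np.diag([np.sqrt(3.0), np.sqrt(0.5)])

W = rng.normal(0, 0.1, size=(2, 2))            # rows -> principal components
eta = 0.01
for t in range(20000):
    x = C_half @ rng.normal(size=2)            # sample with chosen covariance
    y = W @ x
    # Sanger's rule: Hebbian term minus a lower-triangular
    # (Gram-Schmidt-like) decorrelation term.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

pc1 = R[:, 0]                                  # true leading eigenvector
print("alignment with PC1:", abs(W[0] @ pc1) / np.linalg.norm(W[0]))
```

Early in training the first row moves fast along the dominant direction, which is exactly the regime in which, as the abstract notes, knowledge about the second direction can be lost.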
---------------------------------------------------------------------
Retrieval procedure:
unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous Password: {your e-mail address}
ftp> cd pub/preprint/1997
ftp> binary
ftp> get WUE-ITP-97-003.ps.gz (*)
ftp> quit
unix> gunzip WUE-ITP-97-003.ps.gz
e.g. unix> lp WUE-ITP-97-003.ps [10 pages]
(*) can be replaced by "get WUE-ITP-97-003.ps". The file will then
be uncompressed before transmission (slow!).
_____________________________________________________________________
--
Michael Biehl
Institut fuer Theoretische Physik
Julius-Maximilians-Universitaet Wuerzburg
Am Hubland
D-97074 Wuerzburg
email: biehl@physik.uni-wuerzburg.de
homepage: http://www.physik.uni-wuerzburg.de/~biehl
Tel.: (+49) (0)931 888 5865 or 5131
Fax : (+49) (0)931 888 5141
From sami@guillotin.hut.fi Fri Apr 4 19:53:53 1997
Date: Fri, 4 Apr 1997 14:56:49 +0200
Message-Id: <199704041256.OAA01930@guillotin.hut.fi>
From: Sami Kaski
To: connectionists@cs.cmu.edu
Subject: Thesis on data exploration with SOMs available
Reply-to: sami.kaski@hut.fi
The following Dr.Tech. thesis is available at
http://nucleus.hut.fi/~sami/thesis/thesis.html (html-version)
http://nucleus.hut.fi/~sami/thesis.ps.gz (compressed postscript, 300K)
http://nucleus.hut.fi/~sami/thesis.ps (postscript, 2M)
The articles that belong to the thesis can be accessed through the page
http://nucleus.hut.fi/~sami/thesis/node3.html
---------------------------------------------------------------
Data Exploration Using Self-Organizing Maps
Samuel Kaski
Helsinki University of Technology
Neural Networks Research Centre
P.O.Box 2200 (Rakentajanaukio 2C)
FIN-02015 HUT, Finland
Finding structures in vast multidimensional data sets, be they
measurement data, statistics, or textual documents, is difficult and
time-consuming. Interesting, novel relations between the data items
may be hidden in the data. The self-organizing map (SOM) algorithm of
Kohonen can be used to aid the exploration: the structures in the data
sets can be illustrated on special map displays.
In this work, the methodology of using SOMs for exploratory data
analysis or data mining is reviewed and developed further. The
properties of the maps are compared with the properties of related
methods intended for visualizing high-dimensional multivariate data
sets. In a set of case studies the SOM algorithm is applied to
analyzing electroencephalograms, to illustrating structures of the
standard of living in the world, and to organizing full-text document
collections.
Measures are proposed for evaluating the quality of different types of
maps in representing a given data set, and for measuring the
robustness of the illustrations the maps produce. The same measures
may also be used for comparing the knowledge that different maps
represent.
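The simplest such measure is the mean quantization error of a map's codebook on a data set (the thesis develops more refined measures than this). A toy comparison on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)

def quantization_error(data, codebook):
    """Mean distance from each datum to its best-matching codebook unit."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=-1)
    return d.min(axis=1).mean()

data = rng.normal(size=(500, 3))
fitted = data[rng.choice(len(data), 25, replace=False)]   # units on the data
random = rng.normal(5.0, 1.0, size=(25, 3))               # units off the data
print(quantization_error(data, fitted), quantization_error(data, random))
```

A smaller value means the map represents the data set more faithfully; comparing such values across maps is one way to compare what different maps represent.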
Feature extraction must in general be tailored to the application, as
is done in the case studies. There exists, however, an algorithm
called the adaptive-subspace self-organizing map, recently developed
by Kohonen, which may be of help. It extracts invariant features
automatically from a data set. The algorithm is here characterized in
terms of an objective function, and demonstrated to be able to
identify input patterns subject to different transformations.
Moreover, it could also aid in feature exploration: the kernels that
the algorithm creates to achieve invariance can be illustrated on map
displays similar to those that are used for illustrating the data
sets.
From movellan@ergo.ucsd.edu Fri Apr 4 19:53:54 1997
Date: Fri, 4 Apr 1997 14:27:09 -0800
Message-Id: <199704042227.OAA31931@ergo.ucsd.edu>
From: "Javier R. Movellan"
To: connectionists@cs.cmu.edu
Subject: UCSD Cogsci TR
The following technical report is available online at
http://cogsci.ucsd.edu (follow links to Tech Reports & Software )
Physical copies are also available (see the site for information).
Analysis of Direction Selectivity Arising From
Recurrent Cortical Interactions.
Paul Mineiro and David Zipser
UCSD Cogsci TR.97.03
The relative contributions of feedforward and recurrent connectivity to the
direction selective responses of cells in layer IVB of primary visual cortex
are currently the subject of debate in the neuroscience community. Recently,
biophysically detailed simulations have shown that realistic direction
selective responses can be achieved via recurrent cortical interactions
between cells with non-direction selective feedforward input
\cite{Koch:DS,Maex:SpikeMotion}. Unfortunately, the complexity of these
models, while desirable for detailed comparison with biology, makes them
difficult to analyze mathematically. In this paper a relatively simple cortical
dynamical model is used to analyze the emergence of direction selective
responses via recurrent interactions. Comparison between a model based on
our analysis and physiological data is presented. The approach also allows
analysis of the recurrently propagated signal, revealing the predictive
nature of the implementation.
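The flavour of such a model can be sketched with a toy rate network: every unit receives identical, non-direction-selective feedforward drive, but the recurrent connections are asymmetric (each unit excites its rightward neighbour), so a rightward sweep arrives at units already primed by the recurrently propagated signal. This is an invented illustration, not the authors' model; all numbers are made up.

```python
import numpy as np

N, T, dt = 40, 400, 0.1

W = np.zeros((N, N))
for i in range(N - 1):
    W[i + 1, i] = 0.6                          # asymmetric lateral excitation

def peak_response(direction):
    """Peak unit activity for a stimulus sweeping in the given direction."""
    r = np.zeros(N)
    peak = 0.0
    for step in range(T):
        pos = int(step * N / T)                # stimulus position this step
        if direction < 0:
            pos = N - 1 - pos                  # leftward sweep
        inp = np.zeros(N)
        inp[pos] = 1.0                         # identical feedforward drive
        r = r + dt * (-r + W @ r + inp)        # leaky recurrent dynamics
        peak = max(peak, r.max())
    return peak

right, left = peak_response(+1), peak_response(-1)
print(f"peak response: rightward {right:.2f}, leftward {left:.2f}")
```

For the rightward sweep the recurrent signal runs ahead of the stimulus, which is the predictive character of the implementation mentioned in the abstract.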
From riegler@ifi.unizh.ch Sat Apr 5 04:31:16 1997
Message-Id:
Date: Fri, 4 Apr 1997 15:51:10 +0200
From: Alex Riegler
Subject: NTCS-97 Call For Participation
CALL FOR PARTICIPATION
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
International Workshop
N E W T R E N D S I N C O G N I T I V E S C I E N C E
NTCS '97
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
"Does Representation need Reality?"
Perspectives from Cognitive Science, Neuroscience,
Epistemology, and Artificial Life
Vienna, Austria, May 14 - 16, 1997
with plenary talks by:
Larry Cauller, Georg Dorffner, Ernst von Glasersfeld,
Stevan Harnad, Wolf Singer, and Sverre Sjoelander
organized by the Austrian Society of Cognitive Science (ASoCS)
===========================================================================
Latest information can be retrieved from the conference WWW-page
===========================================================================
P u r p o s e
___________________________________________________________________________
The goal of this single-track conference is to investigate and discuss new
approaches and movements in cognitive science in a workshop-like
atmosphere. Among the topics which seem to have emerged in the last years
are: embodiment of knowledge, system theoretic and computational
neuroscience approaches to cognition, dynamics in recurrent neural
architectures, evolutionary and artificial life approaches to cognition,
and (epistemological) implications for perception and representation,
constructivist concepts and the problem of knowledge representation,
autopoiesis, implications for epistemology and philosophy (of science).
Evidence for a failure of the traditional understanding of neural
representation converges from several fields. Neuroscientific results in
the last decade have shown that single-cell representations with
hierarchical processing towards representing units do not seem to be the
way the cortex represents environmental entities. Instead, distributed cell
ensemble coding has become a popular concept for representation, both in
computational and in empirical neuroscience. However, new problems arise
from the new concepts. The problem of binding the distributed parts into a
uniform percept can be "solved" by introducing synchronization of the
member neurons. A deeper (epistemological) problem, however, is created by
recurrent architectures within ensembles generating an internal dynamics in
the network. The cortical response to an environmental stimulus is no
longer dominated by stimulus properties themselves, but to a considerable
degree by the internal state of the network. Thus, a clear and stable
reference between a representational state (e.g. in a neuron, a Hebbian
ensemble, an activation state, etc.) and the environmental state becomes
questionable. Already learned experiences and expectancies might have an
impact on the neural activity which is as strong as the stimulus itself.
Since these internally stored experiences are constantly changing, the
notion of (fixed) representations is challenged. At this point, system
theory and constructivism, both investigating the interaction between
environment and organism at an abstract level, come into the scene and turn
out to provide helpful epistemological concepts. The goal of this
conference is to discuss these phenomena and their implications for the
understanding of representation, semantics, language, cognitive science,
and artificial life.
Contrary to many conferences in this field, the focus is on
interdisciplinary cooperation and on conceptual and epistemological
questions, rather than on technical details. We are trying to achieve this
by giving more room to discussion and interaction between the participants
(e.g., invited comments on papers, distribution of papers to the
participants before the conference, etc.). According to the
interdisciplinary character of cognitive science, we welcome papers/talks
from the fields of artificial life, empirical, cognitive, and computational
neuroscience, philosophy (of science), epistemology, anthropology, computer
science, psychology, and linguistics.
T a l k s
___________________________________________________________________________
NOTE: Names with * are invited speakers.
Constructivism
Ernst von Glasersfeld* Piaget's legacy: Cognition as adaptive activity
Sverre Sjoelander* How animals handle reality: the adaptive aspect
of representation
Annika Wallin Is there a way to distinguish representation from
Perception...
Tom Routen Habitus and Animats
General Epistemology and Methodology
W.F.G. Haselager Is cognitive science advancing towards
behaviorism?
Michael Pauen Reality and representation
William Robinson Representation and cognitive explanation
Matthias Scheutz The ontological status of representations
Anthony Chemero Two types of anti-representationism: a taxonomy
Georg Schwarz Can representation get reality?
Neuroscience
Larry Cauller* NeuroInteractivism: Explaining emergence without
representation
Wolf Singer* The observer in the brain
Steven Bressler The dynamic manifestation of cognitive structures
in the cerebral cortex
Erich Harth Sketchpads in and beyond the brain
Marius Usher Active Neural representations: neurophysiological
data and its implications
Symbol Grounding and Communication
Georg Dorffner* The connectionist route to embodiment
and dynamicism
Stevan Harnad* Keeping a grip on the real/virtual distinction
in this representationalist age
Mark Wexler Must mental representation be internal?
Tom Ziemke Rethinking Grounding
Christian Balkenius Explorations in synthetic pragmatics
Horst Hendriks-Jansen Does natural cognition need internal knowledge
structures?
Nathan Chandler On the importance of reality in representations
Peter Gaerdenfors Does semantics need reality?
P o s t e r s
___________________________________________________________________________
Chris Browne Iconic Learning and Epistemology
Mark Claessen RabbitWorld: the concept of space can be learned
Valentin Constantinescu Interaction between perception & expectancy...
Andrew Coward Unguided categorization, direct and symbolic
representation, and evolution of cognition...
David Davenport PAL: A constructivist model of cognitive activity
Karl Diller Representation and reality: where are the rules of
grammar?
Richard Eiser Representation and the social reality
Robert French When coffee cups are like old elephants
Maurita Harney Representation and its metaphysics
Daniel Hutto Cognition without representation?
Amy Ione Symbolic creation and re-representation of reality
Sydney Lamb Top-Down Modeling, Bottom-up Learning
Michael Luntley Real Representations
Ralf Moeller Perception through anticipation
Ken Mogi Response selectivity, neuron doctrine,
and Mach's principle in perception
Alfredo Pereira The term "representation" in cognitive
neuroscience
Michael Ramscar Judgement of association: problems with cognitive
theories of analogy
Hanna Risku Constructivist Consequences: does translation
need reality?
Sabine Weiss Cooperation of different neural networks during
single word and sentence processing
R e g i s t r a t i o n
___________________________________________________________________________
To register, please fill out the registration form at the bottom of this CFP
and send it by...
o Email to franz-markus.peschl@univie.ac.at, or by
o Fax to +43-1-408-8838 (attn. M.Peschl), or by
o Mail to Markus Peschl, Dept. for Philosophy of Science (address below)
Registration Fee (includes admission to talks, presentations, and
proceedings):
Member * 1300 ATS (about 118 US$)
Non-Member 1800 ATS (about 163 US$)
Student Member ** 500 ATS (about 45 US$)
Student Non-Member 1300 ATS (about 118 US$)
*) Members of the Austrian Society of Cognitive Science
**) Requires proof of valid student ID
C o n f e r e n c e S i t e a n d A c c o m m o d a t i o n
___________________________________________________________________________
The conference takes place in a small beautiful baroque castle in the
suburbs of Vienna; the address is:
Schloss Neuwaldegg
Waldegghofg. 5
A-1170 Wien
Austria
Tel: +43 1 485 3605
Fax: +43 1 485 3605-112
It is surrounded by a beautiful forest and offers good international and
Viennese dining options nearby. By tram, the center of Vienna can be
reached in only 20 minutes.
Limited accommodation is provided by the castle (about 41 US$ per night
for a single room, 30 US$ per night per person for a double room,
including breakfast). Please contact the telephone number above. You can
find more information about Vienna and accommodation at the
Vienna Tourist Board
or at the Intropa Travel agent Tel: +43-1-5151-242.
Note:
In case you want to stay over the weekend we refer you to the following
hotel which is near the conference site (single about 75 US$ / 850 ATS
per night):
Hotel Jaeger
Hernalser Hauptstrasse 187
A-1170 Wien
Austria
Tel: +43 1 486 6620
Fax: +43 1 486 6620 8
D e s t i n a t i o n V i e n n a ?
___________________________________________________________________________
Vienna, Austria, can be reached internationally by plane or train. The
Vienna Schwechat airport is located about 16 km from the city center. From
the airport, the city air-terminal can be reached by bus (ATS 60.- per
person) or taxi (about ATS 400). Rail-passengers arrive at one of the
main stations which are located almost in the city center. From the
air-terminal and the railway stations the congress site and hotels can be
reached easily by underground (U-Bahn), tramway, or bus. A detailed
description will be given to the participants.
In May the climate in Vienna is mild. Spring is at its peak and
everything is in bloom. The weather is warm with occasional (rare)
showers, and temperatures range from about 18 to 24 degrees Celsius.
More information about Vienna and Austria on the web:
Welcome to Vienna Scene
Vienna City
Wiener Festwochen - Vienna Festival
Public Transport in Vienna (subway)
Welcome to Austria
General information about Austria
Austria Annotated
S c i e n t i f i c C o m m i t t e e
___________________________________________________________________________
R. Born Univ. of Linz (A)
G. Dorffner Univ. of Vienna (A)
E. v. Glasersfeld Univ. of Amherst, MA (USA)
S. Harnad Univ. of Southampton (GB)
M. Peschl Univ. of Vienna (A)
A. Riegler Univ. of Zurich (CH)
H. Risku Univ. of Skovde (S)
M. Scheutz Univ. of Indiana (USA)
W. Singer Max Planck Institut, Frankfurt (D)
S. Sjoelander Linkoeping University (S)
A. v. Stein Neuroscience Institute, La Jolla (USA)
O r g a n i z i n g C o m m i t t e e
___________________________________________________________________________
M. Peschl Univ. of Vienna (A)
A. Riegler Univ. of Zurich (CH)
S p o n s o r i n g O r g a n i z a t i o n s
___________________________________________________________________________
o Christian Doppler Laboratory for Expert Systems
(Vienna University of Technology)
o Oesterreichische Forschungsgemeinschaft
o Austrian Federal Ministry of Science, Transport and the Arts
o City of Vienna
A d d i t i o n a l I n f o r m a t i o n
___________________________________________________________________________
For further information on the conference contact:
Markus Peschl
Dept. for Philosophy of Science
University of Vienna
Sensengasse 8/10
A-1090 Wien
Austria
Tel: +43-1-402-7601/41
Fax: +43-1-408-8838
Email: franz-markus.peschl@univie.ac.at
General information about the Austrian Society for Cognitive Science can be
found on the Society webpage or by
contacting
Alexander Riegler
AILab, Dept. of Computer Science
University of Zurich
Winterthurerstr. 190
CH-8057 Zurich
Switzerland
Email: riegler@ifi.unizh.ch
R e g i s t r a t i o n f o r m
___________________________________________________________________________
I will participate in the Workshop "New Trends in Cognitive Science (NTCS'97)"
Full Name
........................................................................
Full Postal Address:
........................................................................
........................................................................
........................................................................
Telephone Number (Voice): Fax:
..................................... ..................................
Email address:
........................................................................
Payment in ATS (= Austrian Schillings; 1 US$ is currently about 11 ATS).
This fee includes admission to talks, presentations, and proceedings:
[ ] Member * 1300 ATS (about 118 US$)
[ ] Non-Member 1800 ATS (about 163 US$)
[ ] Student Member ** 500 ATS (about 45 US$)
[ ] Student Non-Member 1300 ATS (about 118 US$)
*) Members of the Austrian Society of Cognitive Science
**) Requires proof of valid student ID
Total: .................... ATS
[ ] Visa
[ ] Master-/Eurocard
Name of Cardholder ........................................
Credit Card Number ........................................
Expiration Date .................
Date: ................ Signature: ........................................
Please send this form by...
o Email to franz-markus.peschl@univie.ac.at, or by
o Fax to +43-1-408-8838 (attn. M.Peschl), or by
o Mail to Markus Peschl, Dept. for Philosophy of Science,
Univ. of Vienna, Sensengasse 8/10, A-1090 Wien, Austria