From giles@research.nj.nec.com Sun Mar 19 06:41:55 1995
Date: Fri, 17 Mar 95 16:26:10 EST
From: Lee Giles
Message-Id: <9503172126.AA17893@alta>
To: connectionists@cs.cmu.edu
Subject: Computational capabilities of recurrent NARX neural networks
Cc: giles@research.nj.nec.com
The following Technical Report is available via the University of Maryland
Department of Computer Science and the NEC Research Institute archives:
_____________________________________________________________________________
Computational capabilities of recurrent NARX neural networks
UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-12 AND CS-TR-3408
H. T. Siegelmann[1], B. G. Horne[2], C. L. Giles[2,3]
[1] Dept. of Information Systems Engineering, Technion, Haifa 32000, Israel
[2] NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
[3] UMIACS, University of Maryland, College Park, MD 20742
iehava@ie.technion.ac.il
{horne,giles}@research.nj.nec.com
Recently, fully connected recurrent neural networks have been proven
to be computationally rich --- at least as powerful as Turing
machines. This work focuses on another class of network, popular in
control applications, which has been found to be very effective at
learning a variety of problems. These networks are based upon
Nonlinear AutoRegressive models with eXogenous Inputs (NARX models),
and are therefore called {\em NARX networks}. As opposed to other
recurrent networks, NARX networks have a limited feedback which comes
only from the output neuron rather than from hidden states. They are
formalized by
\[
y(t) = \Psi
\left( \rule[-1ex]{0em}{3ex}
u(t-n_u), \ldots, u(t-1),
u(t), y(t-n_y), \ldots, y(t-1)
\right),
\]
where $u(t)$ and $y(t)$ represent input and output of the network at
time $t$, $n_u$ and $n_y$ are the input and output order, and the
function $\Psi$ is the mapping performed by a Multilayer Perceptron.
We constructively prove that the NARX networks with a finite number of
parameters are computationally as strong as fully connected recurrent
networks and thus Turing machines. We conclude that, in theory, one can
use NARX models rather than conventional recurrent networks
without any computational loss, even though their feedback is limited.
Furthermore, these results raise the issue of what amount of feedback
or recurrence is necessary for any network to be Turing equivalent and
what restrictions on feedback limit computational power.
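For readers who want to experiment, here is a minimal numerical sketch of
the NARX recursion defined above (this is illustrative only, not the
constructive proof in the report; the MLP weights are random):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden):
    """A random single-hidden-layer MLP: this plays the role of Psi."""
    return (rng.standard_normal((n_hidden, n_in)) * 0.5,  # W1
            np.zeros(n_hidden),                           # b1
            rng.standard_normal(n_hidden) * 0.5,          # w2
            0.0)                                          # b2

def narx_run(u, n_u, n_y, params):
    """y(t) = Psi(u(t-n_u),...,u(t-1),u(t), y(t-n_y),...,y(t-1)).
    Feedback comes only from past outputs, never from hidden state."""
    W1, b1, w2, b2 = params
    u = np.concatenate([np.zeros(n_u), np.asarray(u, float)])  # zero-pad history
    y = list(np.zeros(n_y))                                    # zero initial outputs
    for t in range(n_u, len(u)):
        x = np.concatenate([u[t - n_u : t + 1], y[-n_y:]])     # tapped delay lines
        h = np.tanh(W1 @ x + b1)
        y.append(float(w2 @ h + b2))
    return np.array(y[n_y:])

params = make_mlp(n_in=3 + 2, n_hidden=8)   # n_u = 2 gives 3 input taps; n_y = 2
out = narx_run([1.0, 0.0, 0.5, -0.5], n_u=2, n_y=2, params=params)
print(out.shape)  # → (4,)  one output per input sample
```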
-------------------------------------------------------------------------
http://www.neci.nj.nec.com/homepages/giles.html
http://www.cs.umd.edu/TRs/TR-no-abs.html
or
ftp://ftp.nj.nec.com/pub/giles/papers/NARX.capabilities.ps.Z
--------------------------------------------------------------------------
--
C. Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482
URL http://www.neci.nj.nec.com/homepages/giles.html
==
From Frank.Kelly@cs.tcd.ie Tue Mar 21 00:02:37 1995
Message-Id: <9503210602.AA25483@lucy.cs.wisc.edu>
Subject: Connectionist models of Figure-Ground Segregation (Problems?)
To: connectionists@cs.cmu.edu
Date: Mon, 20 Mar 1995 12:18:45 +0000 (WET)
From: Frank Kelly
Sender: Frank.Kelly@cs.tcd.ie
Hello,
I am doing a project on Nonlinear Coupled Oscillators applied to
Figure-Ground Segregation. Current models I have examined are included
below my mail.sig.
Basically, the question I would like to pose is the following:
although all of these models 'solve' figure-ground segregation to some
degree, can anyone say which model is 'best', and on what criteria can
we base that judgement?
e.g.
One of the key criteria for my project is speed, so what I would be
interested in knowing is: which model is fastest, and does any model
approach the speed at which the human visual system segregates figure
and ground?
Other criteria would be:
* Resistance to noise
* Biological plausibility
* Model complexity (e.g. does the neuron model allow for orientation
  selectivity; does the model require full connectivity between all
  nodes?)
* Use of attentional mechanisms
I would appreciate any light people could throw on this subject of
finding a 'best' model, especially experimental results/papers.
BTW, if anyone knows of any other systems (or has comments to make on any
of the above systems) I would be grateful if you could contact me.
Many Thanks in advance,
--Frank Kelly
= Frank.Kelly@cs.tcd.ie | AI group, Dept. of Computer Science, =
= Work: +353-1-608 1800 | Trinity College, Dublin 2. Ireland. =
= WWW : http://www.cs.tcd.ie/www/kellyfj/kellyfj.html =
So far I have found the following systems:
--------------------------------------------
[Von der Malsburg & Schneider 86]
Von der Malsburg, C., and W. Schneider, A Neural Cocktail-Party
Processor, Biological Cybernetics 54, 29-40 (1986)
[Von der Malsburg & Buhmann 92]
Von der Malsburg, C., and J. Buhmann Sensory Segmentation with
coupled neural oscillators in Biological Cybernetics 67, 233-242 (1992)
[Sompolinsky et al 90]
Sompolinsky, H., Golomb, D., and D. Kleinfeld Global processing
of visual stimuli in a neural network of coupled oscillators in
Proceedings of the National Academy of Sciences, USA Vol.87,
pp.7200-7204, September 1990.
[Sejnowski & Hinton 87]
Sejnowski, T.J., and G.E. Hinton Separating Figure from Ground
with a Boltzmann Machine in (Arbib 87)
[Pabst et al. 89]
Pabst, M., H.J. Reitboeck, and R. Eckhorn A model of Preattentive region
definition based on texture analysis in (Cotterill 89)
[Konig et al. 92]
Konig, P., Janosch, B., and T.B. Schillen Stimulus-Dependent
Assembly Formation of Oscillatory Responses : III. Learning in Neural
Computation 4, 666-681 (1992)
[Kammen et al. 89]
Kammen, D.M., P.J. Holmes, and C. Koch Cortical Architecture and
Oscillations in Neuronal Networks : Feedback vs. Local Coupling in
(Cotterill 89)
[Grossberg & Somers 91]
Grossberg, S., and D. Somers Synchronized oscillations during
cooperative feature linking in a cortical model of visual perception in
Neural Networks Vol. 4 pp. 453-466
[Fellenz 94]
Fellenz W.A. A Neural Network for Preattentive Perceptual Grouping in
Proceedings of the Irish Neural Networks Conference 1994
University College Dublin, Sept. 12-13, 1994
[Eckhorn et al 89]
Eckhorn, R., H.J. Reitboeck, M. Arndt, and P.Dicke A Neural
Network for feature linking via synchronous activity in (Cotterill 89)
[Yamaguchi & Hiroshi 94]
Yamaguchi, Y., and S. Hiroshi, Pattern recognition with figure-ground
separation by generation of coherent oscillations, Neural Networks
Vol.3, 1994, pp.153-170
[Campbell and Wang 94]
Campbell, S., and D. Wang Synchronization and Desynchronization
in a Network of Locally Coupled Wilson-Cowan Oscillators in Technical
Report OSU-CISRC-8/94-TR43, Lab for AI Research, Dept. of Computer and
Information Science and Center for Cognitive Science, The Ohio State
University, Columbus, Ohio 43210-1277, USA
[Sporns et al. 91]
Sporns, O., Tononi, G., and G.M. Edelman, Modeling perceptual
grouping and figure-ground segregation by means of active reentrant
connections, Proc. Natl. Acad. Sci. USA Vol.88, pp.129-133, January
1991
n.b.
[Cotterill 89]
Cotterill, R.M.J. Models of Brain Function 1989
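(For context, many of the oscillator models listed above share a common
skeleton: units representing the "figure" synchronize through positive
coupling, while "ground" units stay incoherent. The sketch below is a
Kuramoto-style simplification, not any of the cited models; the coupling
values and group sizes are made up.)

```python
import numpy as np

rng = np.random.default_rng(1)

def kuramoto(theta, K, omega, dt=0.05, steps=400):
    """Phase oscillators: d(theta_i)/dt = omega_i + sum_j K_ij sin(theta_j - theta_i).
    Positive K within a group pulls its phases together (synchrony = binding)."""
    for _ in range(steps):
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    return theta % (2 * np.pi)

n = 8
K = np.zeros((n, n))
K[:4, :4] = 0.5                        # "figure": strongly coupled block
                                       # "ground" (last 4): uncoupled, drifts apart
omega = rng.normal(1.0, 0.05, n)       # similar but not identical frequencies
theta = kuramoto(rng.uniform(0, 2 * np.pi, n), K, omega)

coherence = abs(np.exp(1j * theta[:4]).mean())
print(round(coherence, 2))             # close to 1 when the figure group phase-locks
```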
From David_Redish@GS151.SP.CS.CMU.EDU Tue Mar 21 21:02:11 1995
Message-Id: <9503220302.AA27309@lucy.cs.wisc.edu>
To: Connectionists@cs.cmu.edu
Subject: Paper available: Navigating with Landmarks
Date: Sat, 18 Mar 95 12:13:05 EST
From: David_Redish@GS151.SP.CS.CMU.EDU
The following paper is now available electronically (via the Web)
"Navigating with Landmarks:
Computing Goal Locations from Place Codes"
A. David Redish and David S. Touretzky
Carnegie Mellon University
to appear in _Symbolic Visual Learning_,
K. Ikeuchi and M. Veloso, eds.,
Oxford University Press.
A computer model of rodent navigation, based on coupled mechanisms for
place recognition, path integration, and maintenance of head
direction, offers a way to operationally combine constraints from
neurophysiology and behavioral observation. We describe how one such
model reproduces a variety of experiments by Collett, Cartwright, and
Smith (J. Comp Phys. A 158:835-851) in which gerbils learn to find a
hidden food reward, guided by an array of visual landmarks in an open
arena. We also describe some neurophysiological predictions of the
model; these may soon be verified experimentally. Portions of the
model have been implemented on a mobile robot.
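(An illustrative sketch only, not the model in the paper: path
integration in its simplest form is dead reckoning, i.e. accumulating
self-motion vectors to maintain a position estimate without landmarks.)

```python
import math

def path_integrate(start, moves):
    """Dead-reckoning sketch: update an (x, y) position estimate from
    self-motion signals, each a (heading in radians, distance) pair,
    as a rodent's path integrator is assumed to do."""
    x, y = start
    for heading, dist in moves:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

# 3 units east, 4 units north, then 5 units back along the diagonal
pos = path_integrate((0.0, 0.0), [(0.0, 3.0),
                                  (math.pi / 2, 4.0),
                                  (math.atan2(-4.0, -3.0), 5.0)])
print(pos)  # near (0.0, 0.0): the estimate returns home
```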
------------------------------------------------------------
gzipped:
http://www.cs.cmu.edu:8001/Web/People/dredish/pub/vislearn-web.ps.gz
unix compressed:
http://www.cs.cmu.edu:8001/Web/People/dredish/pub/vislearn-web.ps.Z
For other papers of ours, see
http://www.cs.cmu.edu:8001/Web/People/dredish/bibliography.html
------------------------------------------------------------
Notes:
This paper contains large compressed postscript figures and
may take a long time to print out on some printers.
This paper will sometimes produce an "unable to uncompress file"
error; however, in my experience this is a spurious warning
and the paper uncompresses correctly.
Any problems, contact
David Redish
dredish@cs.cmu.edu
From hali@sans.kth.se Fri Mar 24 02:27:06 1995
Date: Wed, 22 Mar 1995 23:14:46 +0100
From: Hans Liljenstrom
Message-Id: <199503222214.AA07496@thalamus.sans.kth.se>
To: connectionists@cs.cmu.edu
Subject: Workshop on Fluctuations in Biology
**********************************************************************
First announcement of an interdisciplinary workshop
organized in collaboration with
the Swedish Council for Planning and Coordination of Research (FRN)
THE ROLE AND CONTROL OF RANDOM EVENTS IN BIOLOGICAL SYSTEMS
Sigtuna, Sweden
4-9 September 1995
MOTIVATION
Life is normally associated with a high degree of order and
organization. However, disorder, referred to in various contexts as
fluctuations, noise or chaos, is also a crucial component of many
biological processes. For example, in evolution, random errors in the
reproduction of the genetic material provide a variation that is
fundamental for the selection of adaptive organisms. At a molecular
level, thermal fluctuations govern the movements and functions of the
macromolecules in the cell. Yet, it is also clear that too large a
variation may have disastrous effects. Uncontrolled processes need
stabilizing mechanisms. More knowledge of the stability requirements
of biological processes is needed in order to better understand these
problems, which also have important medical applications. Many
diseases, for instance certain degenerations of brain cells, are
caused by failure of the stabilizing mechanisms in the cell. Stability
is also important and difficult to achieve in biotechnological
applications.
In particular, there is randomness in structure and function of the
neural networks of the brain. Spontaneous firing of neurons seems to
be important for maintaining an adequate level of activity, but does
this "neuronal noise" have any other significance? What are the
effects of errors and fluctuations in the information processing of
the brain? Can these microscopic fluctuations be amplified to
provide macroscopic effects? Often, one cannot easily determine
whether an apparently random process is due to noise, governed by
uncontrolled degrees of freedom, or if it is a result of
"deterministic chaos". Would the difference be of any importance for
biology? In particular, could chaos, which is characterized by
sensitivity and divergence, be useful for any kind of information
processing that normally depends upon stability and convergence? Could
chaos in the neural dynamics of the brain perhaps be responsible for
(creative) thinking?
OBJECTIVE
The objective of this meeting is to address the questions and problems
given above, for a deeper understanding of the effects of disorder in
biological systems. Fluctuations and chaos have been extensively
studied in physics, but to a much lesser degree in biology. Important
concepts from physics, such as "noise-induced state transitions" and
"controlled chaos" could also be of relevance for biological
systems. Yet, little has been done about such applications and a more
critical analysis of the positive and negative effects of disorder for
living systems is needed. It is essential to make concrete and
testable hypotheses, and to avoid the kind of superficial and more
fashionable treatment that often dominates the field. By bringing
together scientists with knowledge and insights from different
disciplines we hope to shed more light on these problems, which we
think are profound for understanding the phenomenon of life.
SCOPE
A number of invited speakers will provide presentations on the
fundamental problems, but we invite further contributions, in the form
of short lectures, computer demonstrations and posters by additional
participants. We expect everyone to take an active part in the
program, in particular in the general discussions. In order to
maintain close contact between all participants, and to provide an
efficient workshop atmosphere, the number of participants will be
limited to approximately fifty people. A proceedings volume is
planned.
LOCATION
The workshop will be held at a unique guest home in Sigtuna, a royal
town dating from the early Middle Ages. Situated on the shore of the
beautiful lake Malaren, Sigtuna is only 15 km from Stockholm
Intl. Airport and 45 km from downtown Stockholm. It is also
close to the city of Uppsala, which is famous for its Viking graves and
for the oldest university and largest cathedral in Scandinavia. The area
around Sigtuna is full of cultural and historical sites and the great
number of runic stones is unique in the world. There will be
excursions and opportunities for sightseeing. The total cost,
including accommodation, all meals and the registration fee, is 4500 SEK.
Depending on funding availability, we may be able to offer some
financial support.
ORGANIZING COMMITTEE
Clas Blomberg, Dept. of Physics, Royal Institute of Technology, Stockholm
Hans Liljenstrom, Dept. of Comp. Sci., Royal Institute of Technology, Stockholm
Peter Arhem, Nobel Inst. for Neurophysiology, Karolinska Institutet, Stockholm
CONFIRMED INVITED SPEAKERS
Luigi Agnati, Dept. of Neuroscience, Karolinska Inst., Stockholm, Sweden
Agnes Babloyantz, Dept of Chem. Physics, Free University of Brussels, Belgium
Adi Bulsara, NRad, San Diego, USA
Rodney Cotterill, Div. of Biophysics, Technical Univ. of Denmark
Walter Freeman , Dept. of Molecular and Cell Biology, UC Berkeley, USA
Hermann Haken, Inst. f. Theor. Physik und Synergetik, Univ. Stuttgart, Germany
Christof Koch, Computation and Neural Systems Program, Caltech, Pasadena, USA
Larry Liebovitch, Center for Complex Systems, FAU, Boca Raton, USA
Michael Mackey, Dept. of Physiology, McGill University, Montreal, Canada
Frank Moss, Dept. of Physics, University of Missouri, St Louis, USA
Sakire Pogun, Center for Brain Research, Ege University, Izmir, Turkey
Ichiro Tsuda, Dept. of Mathematics, Hokkaido University, Sapporo, Japan
FURTHER INFORMATION
Hans Liljenstrom
SANS - Studies of Artificial Neural Systems
Dept. of Numerical Analysis and Computing Science
Royal Institute of Technology
S-100 44 Stockholm, SWEDEN
Email: hali@sans.kth.se
Phone: +46-(0)8-790 6909
Fax: +46-(0)8-790 0930
========================================================================
If you are interested in participating in this workshop, please fill in
and return the preliminary registration form below:
------------------------------------------------------------------------
Name:
Address:
Student (yes/no):
Willing to contribute with a presentation (yes/no):
Preliminary title/subject:
------------------------------------------------------------------------
From tony@salk.edu Fri Mar 24 09:16:20 1995
Date: Wed, 22 Mar 95 17:20:58 PST
From: Tony Bell
Message-Id: <9503230120.AA04504@salk.edu>
To: comp-neuro@smaug.bbb.caltech.edu, connectionists-request@cs.cmu.edu
Subject: short TR on noisy neurons
----------------------------------
FTP-host: ftp.salk.edu
FTP-file: pub/tony/bell.noisy.ps.Z
----------------------------------
The following (short) technical report is available by anonymous ftp from
the Salk Institute. The file, bell.noisy.ps.Z, is 0.65 Mbytes
compressed, 1.9 Mbytes uncompressed, and 10 pages long (4 figures).
It describes work presented at the 1994 Computation and Neural Systems
meeting (CNS '94) that was too late for inclusion in the Proceedings.
-----------------------------------------------------------------------
Technical Report no. INC-9502, February 1995,
Institute for Neural Computation, UCSD, San Diego, CA 92093-0523
`BALANCING' OF CONDUCTANCES MAY
EXPLAIN IRREGULAR CORTICAL SPIKING.
Anthony J. Bell, Zachary F. Mainen,
Misha Tsodyks & Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute
10010 N. Torrey Pines Road
La Jolla, California 92037
ABSTRACT
Five related factors are identified which enable single compartment
Hodgkin-Huxley model neurons to convert random synaptic input into
irregular spike trains similar to those seen in {\em in vivo} cortical
recordings. We suggest that cortical neurons may operate in a narrow
parameter regime where synaptic and intrinsic conductances are balanced
to reflect, through spike timing, detailed correlations in the inputs.
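(The report's analysis uses Hodgkin-Huxley conductances; the
regular-versus-irregular distinction it addresses can be illustrated
with a much cruder drift-diffusion caricature of the membrane potential,
sketched below with made-up parameters. A mean-driven cell fires like a
clock; a fluctuation-driven, "balanced" cell fires irregularly.)

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def isi_cv(mu, sigma, dt=0.1, steps=200_000, v_th=1.0):
    """Coefficient of variation (CV) of inter-spike intervals for a
    drift-diffusion membrane: dv = mu*dt + sigma*sqrt(dt)*noise,
    reset to 0 at threshold.  CV << 1 means clock-like firing;
    CV near or above 1 means Poisson-like irregularity."""
    noise = rng.standard_normal(steps)
    v, last, isis = 0.0, 0.0, []
    for i in range(steps):
        v += mu * dt + sigma * math.sqrt(dt) * noise[i]
        if v >= v_th:
            t = i * dt
            isis.append(t - last)
            last, v = t, 0.0
    isis = np.array(isis)
    return float(isis.std() / isis.mean())

cv_driven = isi_cv(mu=0.1, sigma=0.01)    # mean drive dominates: regular
cv_balanced = isi_cv(mu=0.01, sigma=0.3)  # fluctuation-driven: irregular
print(cv_driven < cv_balanced)  # → True
```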
-----------------------------------------------------------------------
Can be obtained via ftp as follows:
unix> ftp ftp.salk.edu (or 198.202.70.34)
(log in as "anonymous", e-mail address as password)
ftp> binary
ftp> cd pub/tony
ftp> get bell.noisy.ps.Z
ftp> quit
unix> uncompress bell.noisy.ps.Z
unix> lpr bell.noisy.ps
From rafal@mech.gla.ac.uk Fri Mar 24 19:08:16 1995
Message-Id: <10527.199503241204@gryphon.mech.gla.ac.uk>
Date: Fri, 24 Mar 1995 12:04:20 GMT
From: Rafal W Zbikowski
To: Connectionists@CS.CMU.EDU
Subject: Workshop on Neurocontrol
Neural Adaptive Control Technology Workshop: NACT I
18--19 May, 1995
University of Glasgow, Scotland, UK
The first of a series of three workshops on Neural Adaptive
Control Technology (NACT) will take place on May 18--19, 1995 in
Glasgow, Scotland. This event is being organised in connection
with a three-year European Union funded Basic Research Project in
the ESPRIT framework. The project is a collaboration between
Daimler-Benz Systems Technology Research, Berlin, Germany and the
Control Group, Department of Mechanical Engineering, University of
Glasgow, Glasgow, Scotland.
The project is a study of the fundamental properties of neural
network based adaptive control systems. Where possible, links
with traditional adaptive control systems will be exploited. A
major aim is to develop a systematic engineering procedure for
designing neural controllers for non-linear dynamic systems. The
techniques developed will be evaluated on concrete industrial
problems from within the Daimler-Benz group of companies:
Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The
project leader is Dr.~Ken Hunt (Daimler-Benz) and the other
principal investigator is Professor Peter Gawthrop (University of
Glasgow).
Call for Participation, Provisional Programme, registration
form and hotel booking can be found as the PostScript files:
call.ps Call for Participation
proviso.ps Provisional Programme
register.ps registration & hotel
on the servers detailed below.
FTP server
^^^^^^^^^^
anonymous FTP to: ftp.mech.gla.ac.uk (130.209.12.14)
directory: nact
World-Wide Web server
^^^^^^^^^^^^^^^^^^^^^
http://www.mech.gla.ac.uk/~nactftp/nact.html
WWW server provides a link to the FTP server.
Rafal Zbikowski
Control Group, Department of Mechanical Engineering,
Glasgow University, Glasgow G12 8QQ, Scotland, UK
rafal@mech.gla.ac.uk
From zbyszek@uncc.edu Fri Mar 24 19:08:21 1995
From: Zbigniew Michalewicz
Date: Fri, 24 Mar 1995 09:00:17 -0500
Message-Id: <199503241400.JAA17376@unccsun.uncc.edu>
To: EP-List@magenta.me.fau.edu, Genetic-Programming@CS.Stanford.Edu,
TIERRA@life.slhs.udel.edu, alife@cognet.ucla.edu,
cells@tce.ing.uniroma1.it, cellular-automata@think.com,
colt@cs.uiuc.edu, connectionists@MAILBOX.SRV.CS.CMU.EDU,
evolutionary-computing@mailbase.ac.uk,
ga-molecule-approval@interval.com, gann-list@cs.iastate.edu,
ml@ics.uci.edu, neuron-request@CATTELL.psych.upenn.edu
Subject: 3rd IEEE ICEC '96 call for papers
------------------------ CALL FOR PAPERS ------------------------------------
1996 IEEE International Conference on
Evolutionary Computation (ICEC'96)
Nagoya, Japan, May 20-22, 1996
The 3rd IEEE ICEC'96 is co-sponsored by the IEEE Neural Network Council (NNC)
and the Society of Instrument and Control Engineers (SICE).
The 3rd IEEE ICEC'96 will be organized in conjunction with the Artificial
Life conference (Kyoto, Japan, May 16-18, 1996).
TOPICS:
Theory of evolutionary computation
Applications of evolutionary computation
Efficiency / robustness comparisons with other direct search algorithms
Parallel computer implementations
Artificial life and biologically inspired evolutionary computation
Evolutionary algorithms for computational intelligence
Comparisons between different variants of evolutionary algorithms
Machine learning applications
Genetic algorithms and self-organization
Evolutionary computation for neural networks
Fuzzy logic in evolutionary algorithms
SUBMISSION PROCEDURE:
Prospective authors are invited to submit papers related to the listed
topics for oral or poster presentation. Five (5) copies of the paper must
be submitted for review. Papers should be printed on letter size white
paper, written in English in two-column format in Times or similar font
style, 10 points or larger with 2.5 cm margins on all four sides. A length
of four pages is encouraged, and a limit of six pages, including figures,
tables and references will be enforced.
Centered at the top of the first page should be the complete title of the
paper and the name(s), affiliation(s) and address(es) of the author(s).
All papers (except those submitted for special sessions - which may have
different deadlines - see information on special sessions below) should be
sent to:
Toshio Fukuda, General Chair
Nagoya University
Dept. of Micro System Engineering and
Dept. of Mechano-Informatics and Systems
Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN
Phone: +81-52-789-4478 Fax: +81-52-789-3909
E-mail: fukuda@mein.nagoya-u.ac.jp
IMPORTANT DATES:
Proposals for tutorials/exhibits November 15, 1995
Submission of Papers (except for special sessions) December 20, 1995
Notification of acceptance February 20, 1996
Submission of camera-ready papers April 10, 1996
Program Co-chairs:
Thomas Baeck
Informatik Centrum Dortmund (ICD)
baeck@ls11.informatik.uni-dortmund.de
Hiroaki Kitano
Sony Computer Science Laboratory
kitano@csl.sony.co.jp
Zbigniew Michalewicz
University of North Carolina - Charlotte
zbyszek@uncc.edu
There are several special sessions organized for the 3rd IEEE ICEC '96;
so far these include:
*********************************************************************
"Constrained Optimization, Constraint Satisfaction and EC"
*********************************************************************
Evolutionary Computation has proved its merit in treating difficult problems
in, for example, numerical optimization and machine learning. Nevertheless,
problems where constraints on the search space (i.e., on the candidate
solutions) play an important role have received relatively little attention.
In real-world problems, however, the presence of constraints seems to be
the rule rather than the exception. The class of constrained problems can be
divided into Constraint Satisfaction Problems (CSP) and Constrained
Optimization Problems (COP). This special session addresses both subclasses,
and aims to explore the extent to which EC can usefully tackle problems of
these kinds.
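(As an illustration of one standard EC approach to COPs, not part of the
session announcement: a penalty term folds the constraint violation into
the fitness function, so an unconstrained optimizer can be applied. The
objective, constraint and penalty weight below are made up.)

```python
def penalized_fitness(x, penalty=100.0):
    """Objective (x - 2)^2 with constraint x <= 1, handled as a
    quadratic penalty -- a common way of turning a constrained
    optimization problem into an unconstrained one for an EA."""
    violation = max(0.0, x - 1.0)              # how far outside the feasible region
    return (x - 2.0) ** 2 + penalty * violation ** 2

# The unconstrained optimum x = 2 is infeasible; with the penalty,
# the best point sits essentially on the feasible boundary x = 1.
xs = [i / 100.0 for i in range(-300, 301)]     # stand-in for an EA's search
best = min(xs, key=penalized_fitness)
print(best)  # → 1.01
```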
The session is organized by
Gusz Eiben, chair (Utrecht University, gusz@cs.ruu.nl)
Dave Corne (University of Edinburgh, dave@aifh.ed.ac.uk)
Jurgen Dorn (Technical University of Vienna, dorn@vexpert.dbai.tuwien.ac.at)
Peter Ross (University of Edinburgh, peter@aisb.ed.ac.uk)
Submission:
Four (4) copies of complete (6 pages maximum) papers, preferably in PostScript
form, should be submitted no later than December 15, 1995 to:
A.E. Eiben | email: gusz@cs.ruu.nl
Department of Computer Science |
Utrecht University | Phone: +31-(0)30-533619
P.O.Box 80089 |
3508 TB Utrecht | Fax: +31-(0)30-513791
The Netherlands |
All papers will be reviewed, and authors will be notified of the inclusion of
their papers in the special session by February 15, 1996.
Any questions regarding this special session can be directed to
any of the organizers.
*********************************************************************
"Evolutionary Artificial Neural Networks"
*********************************************************************
Evolutionary Artificial Neural Networks (EANNs) can be considered a
combination of artificial neural networks (ANNs) and evolutionary search
algorithms. Three levels of evolution in EANNs have been studied recently:
the evolution of connection weights, of architectures, and of learning rules.
Major issues in the research of EANNs include their scalability,
generalisation ability and interactions among different levels of evolution.
This special session will serve as a forum for both researchers and
practitioners to discuss these important issues and exchange their latest
research results/ideas in the area.
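(Illustrative only, not from the announcement: the first of the three
levels, evolution of connection weights, can be sketched as a (1+1)
evolution strategy on a fixed architecture. The task, network size and
mutation step below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(4)

# XOR task: evolve the 13 weights of a fixed 2-3-1 network
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

def loss(w):
    """Sum-squared error of a tanh-hidden, linear-output network
    whose weights are packed into the vector w."""
    W1, b1 = w[:6].reshape(3, 2), w[6:9]
    w2, b2 = w[9:12], w[12]
    h = np.tanh(X @ W1.T + b1)
    return float(((h @ w2 + b2 - Y) ** 2).sum())

w = rng.standard_normal(13) * 0.5
init = best = loss(w)
for _ in range(3000):                          # (1+1)-ES: mutate, keep if no worse
    cand = w + 0.1 * rng.standard_normal(13)
    if loss(cand) <= best:
        w, best = cand, loss(cand)
print(best <= init)  # → True (elitist selection never loses ground)
```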
This special session is organized by X. Yao (xin@cs.adfa.oz.au).
Prospective authors are invited to submit four (4) copies of their papers to
the following address no later than 20 December 1995. (Please do not include
the authors' information, e.g., name and address, in three of the four
submitted copies):
Xin Yao
Department of Computer Science
University College, The University of New South Wales
Australian Defence Force Academy
Canberra, ACT 2600, Australia
Ph: +61 6 268 8819
Fax: +61 6 268 8581
Email: xin@csadfa.cs.adfa.oz.au
All papers will be reviewed. Notification of acceptance/rejection will be sent
out by 20 February 1996. The camera-ready copy must be submitted by 10 April
1996 for inclusion in the conference proceedings.
*********************************************************************
"Evolutionary Robotics and Automation"
*********************************************************************
More and more researchers are applying evolutionary computation techniques
to challenging problems in robotics and automation where classical methods
fail to be effective. In addition to being widely applicable to many hard
problems, evolutionary concepts inspire researchers and users
to be creative in inventing their own versions of evolutionary
algorithms for the specific needs of different problem domains.
This special session serves as a forum for exchanging research results in
this growing interdisciplinary area and for encouraging further exploration
of the fusion between evolutionary computation and intelligent robotics and
automation.
This special session is organized by J. Xiao (xiao@uncc.edu).
Four (4) copies of complete (6 pages maximum) papers should be submitted
no later than December 15, 1995 to:
Jing Xiao
Department of Computer Science
University of North Carolina - Charlotte
Charlotte, NC 28223
Phone: (704) 547-4883
Fax: (704) 547-3516
E-mail: xiao@uncc.edu
All papers will be reviewed, and authors will be notified of the inclusion
of their papers in the special session by February 15, 1996.
Any questions regarding this special session should be directed to
J. Xiao at the above address.
*********************************************************************
"Genetic programming"
*********************************************************************
The goal of automatic programming is to create, in an automated way, a
computer program that enables a computer to solve a problem. Genetic
programming extends the genetic algorithm to the domain of computer
programs: populations of programs are genetically bred to solve problems.
Genetic programming is a domain-independent method for evolving computer
programs that solve, or approximately solve, a variety of problems from a
variety of fields, including many benchmark problems from machine learning
and artificial intelligence such as problems of control, robotics,
optimization, game playing, and symbolic regression (i.e., system
identification, concept learning). Early versions of genetic programming
evolved programs consisting of only a single part (i.e., one main program).
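The breeding of program populations can be sketched in miniature. The toy
below evolves expression trees over {+, *, x, constants} to fit the target
x**2 + x, a tiny symbolic-regression task; the representation, operators,
and parameters are illustrative assumptions, not a reference GP system:

```python
# Toy tree-based GP for symbolic regression of x**2 + x.
import random

OPS = ['+', '*']
XS = [i / 4 for i in range(-8, 9)]
TARGET = [x * x + x for x in XS]

def rand_tree(rng, depth=3):
    if depth == 0 or rng.random() < 0.3:
        return 'x' if rng.random() < 0.7 else round(rng.uniform(-2, 2), 2)
    return (rng.choice(OPS), rand_tree(rng, depth - 1), rand_tree(rng, depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if not isinstance(t, tuple):
        return t
    a, b = evaluate(t[1], x), evaluate(t[2], x)
    v = a + b if t[0] == '+' else a * b
    return max(-1e6, min(1e6, v))  # clamp to avoid overflow on bloated trees

def error(t):
    return sum((evaluate(t, x) - y) ** 2 for x, y in zip(XS, TARGET))

def subtree(rng, t):
    while isinstance(t, tuple) and rng.random() < 0.5:
        t = t[rng.choice([1, 2])]
    return t

def crossover(rng, t, donor):
    # replace a random subtree of t with a random subtree of donor
    if not isinstance(t, tuple) or rng.random() < 0.3:
        return subtree(rng, donor)
    parts = list(t)
    i = rng.choice([1, 2])
    parts[i] = crossover(rng, parts[i], donor)
    return tuple(parts)

def gp(pop_size=60, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [rand_tree(rng) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error)
        elite = pop[:pop_size // 3]
        pop = elite + [crossover(rng, rng.choice(elite), rng.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=error)

best = gp()
print(best, error(best))
```

A full GP system would add mutation, fitness-proportionate selection, and
larger function sets, but the bred-population idea is the same.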
The session is organized by
John R. Koza, Stanford University (Koza@Cs.Stanford.Edu),
Lee Spector, Hampshire College (LSPECTOR@hampshire.edu), and
Yuji Sato, Hitachi Ltd. Central Research Lab. (yuji@crl.hitachi.co.jp).
Prospective authors are encouraged to submit four (4) hard
copies of their papers (6 pages maximum) to be received
by Friday December 15, 1995 to:
John R. Koza
Computer Science Department
Margaret Jacks Hall
Stanford University
Stanford, California 94305-2140 USA
PHONE: 415-723-1517
FAX(Not for paper submission): 415-941-9430
E-MAIL: Koza@Cs.Stanford.Edu
All papers will be reviewed and authors will be notified
about acceptance/rejection by about Wednesday, February
15, 1996.
*********************************************************************
"Self-adaptation in evolutionary algorithms"
*********************************************************************
Evolutionary algorithms (EAs) with the ability to adapt internal
strategy parameters (such as population size, mutation distribution,
type of recombination operator, selective pressure, etc.)
during the search process usually find better solutions than
variants with fixed parameters. Self-adaptation is particularly
useful when different fixed parameter settings produce large
differences in the solution quality of the algorithm. Most
experience has been gathered with (real-coded) EAs whose individuals
adapt their mutation distributions (or step sizes). Here, the
ability to adjust the step size arises from competitive
pressure among individuals. Evidently, self-adapting mechanisms can
be realized by competing subpopulations as well. The potential of
such EAs is essentially unexplored.
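The individual-level mechanism described above can be sketched as follows:
each individual of a (mu, lambda)-style evolution strategy carries its own
step size sigma, which is itself perturbed (log-normally) and inherited, so
good step sizes spread by selection alone. The problem, the learning rate
tau, and all parameters are illustrative assumptions:

```python
# Self-adaptive (mu, lambda)-ES sketch on the sphere function.
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def self_adaptive_es(dim=5, mu=5, lam=30, gens=100, seed=0):
    rng = random.Random(seed)
    tau = 1 / math.sqrt(dim)  # a common heuristic learning rate
    pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            x, sigma = rng.choice(pop)
            new_sigma = sigma * math.exp(tau * rng.gauss(0, 1))  # self-adapt
            offspring.append(
                ([v + new_sigma * rng.gauss(0, 1) for v in x], new_sigma))
        offspring.sort(key=lambda ind: sphere(ind[0]))
        pop = offspring[:mu]  # (mu, lambda)-selection: parents die out
    return pop[0]

best_x, best_sigma = self_adaptive_es()
print(sphere(best_x), best_sigma)
```

Note that sigma shrinks as the population closes in on the optimum even
though no explicit schedule is coded; that is the self-adaptation effect.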
This special session is organized by Guenter Rudolph
(rudolph@ls11.informatik.uni-dortmund.de) and is intended to serve
as a forum to discuss new ideas and to address the question of a
theoretical treatment of self-adapting mechanisms.
Four (4) copies of complete papers (6 pages maximum) should be
submitted no later than December 15, 1995 to:
Guenter Rudolph
ICD Informatik Centrum Dortmund e.V.
Joseph-von-Fraunhofer-Str. 20
D-44227 Dortmund
Germany
Phone : +49 - (0)231 - 9700 - 365
Fax : +49 - (0)231 - 9700 - 959
E-mail: rudolph@ls11.informatik.uni-dortmund.de
All papers will be reviewed. Authors will be notified of
acceptance/rejection by February 15, 1996.
*********************************************************************
"Evolutionary algorithms and fuzzy systems"
*********************************************************************
Fuzzy sets (FS) and evolutionary algorithms have already been successfully
applied to many areas, including fuzzy control and fuzzy clustering. There
are a number of facets to the symbiosis between the technologies of FS and
GAs. On the one hand, evolutionary computation enriches the optimization
environment for fuzzy systems. On the other, fuzzy sets supply a new
macroscopic and domain-specific insight into the fundamental mechanisms of
evolutionary algorithms (including fuzzy crossover, fuzzy reproduction,
fuzzy fitness functions, etc.). The objective of this session is to foster
further interaction between researchers actively engaged in FS and GAs. The
session will provide a broad forum for exchanging ideas between academia and
industry and for discussing recent pursuits in the area.
This special session is organized by Witold Pedrycz (pedrycz@ee.umanitoba.ca).
Prospective authors are encouraged to submit four (4) copies of their
papers (6 pages maximum) by December 15, 1995 to:
Witold Pedrycz
Department of Electrical and Computer Engineering
University of Manitoba
Winnipeg, Canada R3T 2N2
Phone : (204) 474-8380
Fax: (204) 261-4639
E-mail: pedrycz@ee.umanitoba.ca
All papers will be reviewed and authors will be notified about
acceptance/rejection by February 15, 1996.
*********************************************************************
*********************************************************************
The deadline for proposals for organizing a special session during the
3rd IEEE ICEC '96 is 20 August 1995; submit your proposal to any Program
Co-chair.
From omlinc@research.nj.nec.com Fri Mar 24 19:08:23 1995
Date: Fri, 24 Mar 95 13:10:55 EST
From: Christian Omlin
Message-Id: <9503241810.AA04631@arosa>
To: connectionists@cs.cmu.edu
Subject: TR available - fault-tolerant recurrent neural networks
The following Technical Report is available via the NEC Research Institute
archives:
__________________________________________________________________________________
Fault-Tolerant Implementation of Finite-State Automata
in Recurrent Neural Networks
RENSSELAER POLYTECHNIC INSTITUTE DEPT. OF COMPUTER SCIENCE TR CS 95-3
C.W. Omlin[1,2], C.L. Giles[1,3]
[1]NEC Research Institute, 4 Independence Way, Princeton, NJ 08540
[2]CS Department, Rensselaer Polytechnic Institute, Troy, NY 12180
[3]UMIACS, University of Maryland, College Park, MD 20742
{omlinc,giles}@research.nj.nec.com
ABSTRACT
Recently, we have proven that the dynamics of any deterministic finite-state
automaton (DFA) with n states and m input symbols can be implemented in a sparse
second-order recurrent neural network (SORNN) with n+1 state neurons, O(mn)
second-order weights and sigmoidal discriminant functions. We investigate how that
constructive algorithm can be extended to fault-tolerant neural DFA implementations
where faults in an analog implementation of neurons or weights do not affect the
desired network performance. We show that tolerance to weight perturbation can be
achieved easily; tolerance to weight and/or neuron stuck-at-zero faults, however,
requires duplication of the network resources. This result has an impact on the
construction of neural DFAs with a dense internal representation of DFA states.
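The flavour of such a DFA-in-a-network encoding can be sketched with a toy,
hand-programmed second-order net whose state update is
s_i(t+1) = g(sum_{j,k} W[i][j][k] * s_j(t) * x_k(t) + b_i). The example
below (a 2-state parity DFA, gain H, one-hot state and input encodings) is
an illustrative assumption in the spirit of the construction, not the
report's algorithm:

```python
# A second-order recurrent net programmed to implement the parity DFA.
import math

H = 16.0  # sigmoid gain; larger H drives activations closer to 0/1

def g(z):
    return 1 / (1 + math.exp(-z))

# delta[state][symbol] -> next state for the DFA accepting odd numbers of 1s
delta = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
n_states, n_symbols = 2, 2

# Program the transitions: W[i][j][k] = H iff delta(j, k) = i.
W = [[[H if delta[j][k] == i else 0.0 for k in range(n_symbols)]
      for j in range(n_states)] for i in range(n_states)]
b = [-H / 2] * n_states

def run(bits):
    s = [1.0, 0.0]  # one-hot encoding of the start state 0
    for sym in bits:
        x = [1.0 if k == sym else 0.0 for k in range(n_symbols)]
        s = [g(sum(W[i][j][k] * s[j] * x[k]
                   for j in range(n_states) for k in range(n_symbols)) + b[i])
             for i in range(n_states)]
    return s[1] > 0.5  # neuron 1 encodes the accepting (odd-parity) state

print(run([1, 0, 1, 1]))  # three 1s -> odd parity -> True
```

The fault-tolerance question studied in the report is, roughly, how much
such weights (here H) and neurons can be perturbed or stuck at zero before
the encoded transitions break.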
__________________________________________________________________________________
http://www.neci.nj.nec.com/homepages/omlin/omlin.html
or
ftp://ftp.nj.nec.com/pub/omlinc/fault_tolerance.ps.Z
__________________________________________________________________________________
From pja@barbarian.endicott.ibm.com Fri Mar 24 19:08:23 1995
Date: Fri, 24 Mar 1995 13:19:31 -0500
From: "Peter J. Angeline"
Message-Id: <9503241819.AA07491@barbarian.endicott.ibm.com>
To: EP-List@magenta.me.fau.edu, Genetic-Programming@cs.stanford.edu,
TIERRA@life.slhs.udel.edu, alife@cognet.ucla.edu,
cells@tce.ing.uniroma1.it, cellular-automata@think.com,
colt@cs.uiuc.edu, connectionists@MAILBOX.SRV.CS.CMU.EDU,
evolutionary-computing@mailbase.ac.uk,
ga-molecule-approval@interval.com, gann-list@cs.iastate.edu,
ml@ics.uci.edu, neuron-request@CATTELL.psych.upenn.edu
Subject: CFP for 5th Annual Conference on Evolutionary Programming
Reply-To: pja@lfs.loral.com
--------------------------- CALL FOR PAPERS ------------------------------
EP'96
THE FIFTH ANNUAL CONFERENCE ON EVOLUTIONARY PROGRAMMING
SPONSORED BY THE EVOLUTIONARY PROGRAMMING SOCIETY
February 29 to March 3, 1996
Sheraton Harbor Island Hotel
San Diego, CA, USA
General Chairman:
Lawrence J. Fogel, Natural Selection, Inc.
Technical Program Co-Chairs:
Peter J. Angeline, Loral Federal Systems
Thomas Baeck, Informatik Centrum Dortmund
Thomas M. English, Texas Tech University
The Fifth Annual Conference on Evolutionary Programming will serve as a
forum for researchers investigating applications and theory of evolutionary
programming and other related areas in evolutionary and natural
computation. Authors are invited to submit papers which describe original
unpublished research in evolutionary programming, evolution strategies, genetic
algorithms and genetic programming, artificial life, cultural algorithms, and
other models that rely on evolutionary principles. Specific topics include but
are not limited to the use of evolutionary simulations in optimization, neural
network training and design, automatic control, image processing, and other
applications, as well as mathematical theory or empirical analysis providing
insight into the behavior of such algorithms. Of particular interest are
applications of simulated evolution to problems in biology.
Hardcopies of manuscripts must be received by one of the technical program
co-chairs by September 26, 1995. Electronic submissions cannot be accepted.
Papers should be clear, concise, and written in English. Papers received after
the deadline will be handled on a time- and space-available basis. The
notification of the program committee's review decision will be mailed by
November 30, 1995. Papers eligible for the student award must be marked
appropriately for consideration (see below). Camera ready papers are due at the
conference, and will be published shortly after its completion. Submissions
should be single-spaced, 12 pt. font and should not exceed 15 pages including
figures and references. Send five (5) copies of the complete paper to:
In Europe:
Thomas Baeck
Informatik Centrum Dortmund
Joseph-von-Fraunhofer-Str. 20
D-44227 Dortmund
Germany
Email: baeck@home.informatik.uni-dortmund.de
In US:
Peter J. Angeline
Loral Federal Systems
1801 State Route 17C
Mail Drop 0210
Owego, NY 13827
Email: pja@lfs.loral.com
-or-
Thomas M. English
Computer Science Department
Texas Tech University
Lubbock, Texas 79409-3104
Email: english@cs.ttu.edu
Authors outside Europe or the United States may send their paper to any of the
above technical chairmen at their convenience.
SUMMARY OF IMPORTANT DATES
--------------------------
September 26, 1995 Submissions of papers
November 30, 1995 Notification sent to authors
February 29, 1996 Conference Begins
Evolutionary Programming Society Award for Best Student Paper
-------------------------------------------------------------
In order to foster student contributions and encourage exceptional scholarship
in evolutionary programming and closely related fields, the Evolutionary
Programming Society awards one exceptional student paper submitted to the
Annual Conference on Evolutionary Programming. The award carries a $500 cash
prize and a plaque signifying the honor.
To be eligible for the award, all authors of the paper must be full-time
students at an accredited college, university or other educational
institution. Submissions to be considered for this award must be clearly marked
at the top of the title page with the phrase "CONSIDER FOR STUDENT AWARD." In
addition, the paper should be accompanied by a cover letter stating that (1)
the paper is to be considered for the student award (2) all authors are
currently enrolled full-time students at a university, college or other
educational institution, and (3) that the student authors are responsible for
the work presented. Only papers submitted to the conference and marked as
indicated will be considered for the award. Late submissions will not be
considered. Officers of the Evolutionary Programming Society, students under
their immediate supervision, and their immediate family members are not
eligible.
Judging will be made by officers of the Evolutionary Programming Society or by
an Awards Committee appointed by the president. Judging will be based on the
perceived technical merit of the student's research to the field of
evolutionary programming, and more broadly to the understanding of
self-organizing systems. The Evolutionary Programming Society and/or the Awards
Committee reserves the right not to give an award in any year if no eligible
student paper is deemed to be of award quality. Presentation of the Student
Paper Award will be made at the conference.
Program Committee:
J. L. Breeden, Santa Fe Institute
M. Conrad, Wayne State University
K. A. De Jong, George Mason University
D. B. Fogel, Natural Selection, Inc.
G. B. Fogel, University of California at Los Angeles
R. Galar, Technical University of Wroclaw
P. G. Harrald, University of Manchester Institute of Science and Technology
K. E. Kinnear, Adaptive Systems
J. R. McDonnell, Naval Command Control and Ocean Surveillance Center
Z. Michalewicz, University of North Carolina
F. Palmieri, University of Connecticut
R. G. Reynolds, Wayne State University
S. H. Rubin, Central Michigan University
G. Rudolph, University of Dortmund
N. Saravanan, Ford Research
H.-P. Schwefel, University of Dortmund
A. V. Sebald, University of California at San Diego
W. M. Spears, Naval Research Labs
D. E. Waagen, TRW Systems Integration Group
Finance Chair: V. W. Porto, Orincon Corporation
Local Arrangements: W. Page, Naval Command Control and Ocean Surveillance Center
From duff@wrath.cs.umass.edu Fri Mar 24 19:08:24 1995
Date: Fri, 24 Mar 1995 17:06:59 -0500
From: duff@wrath.cs.umass.edu
Message-Id: <9503242206.AA04229@wrath.cs.umass.edu>
To: connectionists@cs.cmu.edu
Subject: Tech Rept: Q-learning for Bandit Problems
The following technical report is available via anonymous ftp:
Q-LEARNING FOR BANDIT PROBLEMS
(COMPSCI Technical Report 95-26)
Michael Duff
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
duff@cs.umass.edu
Multi-armed bandits may be viewed as
decompositionally-structured Markov decision processes
(MDP's) with potentially very large state sets. A
particularly elegant methodology for computing optimal
policies was developed over twenty years ago by Gittins
[Gittins \& Jones, 1974]. Gittins' approach reduces
the problem of finding optimal policies for the
original MDP to a sequence of low-dimensional stopping
problems whose solutions determine the optimal policy
through the so-called ``Gittins indices.'' Katehakis
and Veinott [Katehakis \& Veinott, 1987] have shown
that the Gittins index for a task in state $i$ may be
interpreted as a particular component of the
maximum-value function associated with the
``restart-in-$i$'' process, a simple MDP to which
standard solution methods for computing optimal
policies, such as successive approximation, apply.
This paper explores the problem of learning the Gittins
indices on-line without the aid of a process model; it
suggests utilizing task-state-specific Q-learning
agents to solve their respective restart-in-state-$i$
subproblems, and includes an example in which the
online reinforcement learning approach is applied to a
simple problem of stochastic scheduling---one instance
drawn from a wide class of problems that may be
formulated as bandit problems.
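The Katehakis-Veinott reduction mentioned above can be sketched in a few
lines. The model-based version below uses successive approximation (one of
the standard methods the abstract names) rather than the report's model-free
Q-learning; the two-state arm, its rewards r, transitions P, and the
discount factor are illustrative assumptions:

```python
# Gittins indices via the "restart-in-i" MDP, by successive approximation.
BETA = 0.9                # discount factor (illustrative)
r = [0.9, 0.1]            # one-step rewards of a two-state arm
P = [[0.7, 0.3],
     [0.4, 0.6]]          # transition probabilities of the arm

def gittins_index(i, sweeps=500):
    """Gittins index of state i: in each state s of the restart-in-i MDP
    one may continue (reward r[s], move from s) or restart
    (reward r[i], move from i); the index is (1 - BETA) * V(i)."""
    n = len(r)
    V = [0.0] * n
    for _ in range(sweeps):  # successive approximation (value iteration)
        V = [max(r[s] + BETA * sum(P[s][t] * V[t] for t in range(n)),
                 r[i] + BETA * sum(P[i][t] * V[t] for t in range(n)))
             for s in range(n)]
    return (1 - BETA) * V[i]

nu = [gittins_index(i) for i in range(len(r))]
print(nu)  # the high-reward, sticky state gets the larger index
```

The report's contribution is to learn these restart-in-i values online,
with one Q-learning agent per state and no model of P.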
FTP-host: envy.cs.umass.edu
FTP-file: pub/duff/bandit.ps.Z
18 MBytes compressed / .46 MBytes uncompressed / 32 pages (8 figures)
FTP Instructions:
unix> ftp envy.cs.umass.edu
login: anonymous
password: (your email address)
ftp> cd pub/duff
ftp> binary
ftp> get bandit.ps.Z
ftp> quit
unix> uncompress bandit.ps.Z
unix> lpr bandit.ps
From john@dcs.rhbnc.ac.uk Sat Mar 25 21:23:31 1995
From: John Shawe-Taylor
Message-Id: <199503251656.QAA21004@platon.cs.rhbnc.ac.uk>
To: Connectionists@cs.cmu.edu, colt@cs.uiuc.edu,
neuron-request@cattell.psych.upenn.edu (and individual recipients)
Subject: Technical Report Series in Neural and Computational Learning
Date: Sat, 25 Mar 95 16:56:08 +0000
The European Community ESPRIT Working Group in Neural and Computational
Learning Theory (NeuroCOLT): several new reports available
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-018:
----------------------------------------
On the Complexity of Function Learning
by Peter Auer, Technische Universitaet Graz,
Philip M. Long, Duke University,
Wolfgang Maass, Technische Universitaet Graz,
Gerhard J. Woeginger, Technische Universitaet Graz
Abstract:
The majority of results in computational learning theory are concerned
with concept learning, i.e. with the special case of function learning
for classes of functions with range $\{ 0,1 \}$. Much less is known
about the theory of learning functions with a larger range such as N or
R. In particular relatively few results exist about the general
structure of common models for function learning, and there are only
very few nontrivial function classes for which positive learning
results have been exhibited in any of these models.
We introduce in this paper the notion of a binary branching adversary
tree for function learning, which allows us to give a somewhat
surprising equivalent characterization of the optimal learning cost for
learning a class of real-valued functions (in terms of a max-min
definition which does not involve any ``learning'' model).
Another general structural result of this paper relates the cost for
learning a union of function classes to the learning costs for the
individual function classes.
Furthermore, we exhibit an efficient learning algorithm for learning
convex piecewise linear functions from $R^d$ into $R$. Previously, the
class of linear functions from $R^d$ into $R$ was the only class of
functions with multi-dimensional domain that was known to be learnable
within the rigorous framework of a formal model for on-line learning.
Finally we give a sufficient condition for an arbitrary class $\F$ of
functions from $R$ into $R$ that allows us to learn the class of all
functions that can be written as the pointwise maximum of $k$ functions
from $\F$. This allows us to exhibit a number of further nontrivial
classes of functions from $R$ into $R$ for which there exist efficient
learning algorithms.
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-019:
----------------------------------------
Neural Nets with Superlinear VC-Dimension
by Wolfgang Maass, Institute for Theoretical Computer Science,
Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria
Abstract:
It has been known for quite a while that the Vapnik-Chervonenkis
dimension (VC-dimension) of a feedforward neural net with linear
threshold gates is at most $O(w \cdot \log w)$, where $w$ is the total
number of weights in the neural net. We show in this paper that this
bound is in fact asymptotically optimal. More precisely, we exhibit
for any depth $d\geq 3$ a large class of feedforward neural nets of
depth $d$ with $w$ weights that have VC-dimension $\Omega(w\cdot \log
w)$. This lower bound holds even if the inputs are restricted to
boolean values. The proof of this result relies on a new method that
allows us to encode more ``program-bits'' in the weights of a neural
net than previously thought possible.
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-020:
----------------------------------------
Efficient Agnostic PAC-Learning with Simple Hypotheses
by Wolfgang Maass, Institute for Theoretical Computer Science,
Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria
Abstract:
We exhibit efficient algorithms for agnostic PAC-learning with
rectangles, unions of two rectangles, and unions of $k$ intervals as
hypotheses. These hypothesis classes are of some interest from the
point of view of applied machine learning, because empirical studies
show that hypotheses of this simple type (in just one or two of the
attributes) provide good prediction rules for various real-world
classification problems. In addition, optimal hypotheses of this type
may provide valuable heuristic insight into the structure of a
real-world classification problem.
The algorithms that are introduced in this paper make it feasible to
compute optimal hypotheses of this type for a training set of several
hundred examples. We also exhibit an approximation algorithm that can
compute near optimal hypotheses for much larger datasets.
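For the simplest of the hypothesis classes above, unions of k intervals over
one attribute, an optimal hypothesis on a training set can be found by a
short dynamic program over the sorted points. This is an illustration of
the hypothesis class only, an assumption of this sketch rather than the
paper's algorithm:

```python
# Minimum training error achievable by a union of at most k intervals.
def best_k_intervals(labels, k):
    """labels: 0/1 labels of points sorted by attribute value.
    Returns the minimum number of misclassified points when the
    positive region is a union of at most k intervals."""
    INF = float('inf')
    # cost[j][s]: min errors so far, j intervals opened, s = 1 if the
    # current point lies inside an open interval
    cost = [[INF] * 2 for _ in range(k + 1)]
    cost[0][0] = 0
    for y in labels:
        new = [[INF] * 2 for _ in range(k + 1)]
        for j in range(k + 1):
            for s in (0, 1):
                c = cost[j][s]
                if c == INF:
                    continue
                # place this point outside: error iff its label is 1
                new[j][0] = min(new[j][0], c + (1 if y == 1 else 0))
                # place it inside: error iff its label is 0; opening a
                # new interval from outside consumes one of the k
                jj = j + 1 if s == 0 else j
                if jj <= k:
                    new[jj][1] = min(new[jj][1], c + (1 if y == 0 else 0))
        cost = new
    return min(min(row) for row in cost)

print(best_k_intervals([0, 1, 1, 0, 0, 1, 1, 0], k=2))  # two clusters -> 0
print(best_k_intervals([0, 1, 1, 0, 0, 1, 1, 0], k=1))  # one interval -> 2
```

The run time is O(nk) per attribute, which makes training sets of a few
hundred examples entirely feasible, in line with the abstract's claim.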
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-002:
----------------------------------------
Agnostic PAC-Learning of Functions on Analog Neural Nets
by Wolfgang Maass, Institute for Theoretical Computer Science,
Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz,
Austria
Abstract:
We consider learning on multi-layer neural nets with piecewise
polynomial activation functions and a fixed number $k$ of numerical
inputs. We exhibit arbitrarily large network architectures for which
efficient and provably successful learning algorithms exist in the
rather realistic refinement of Valiant's model for probably
approximately correct learning (``PAC-learning'') where no a-priori
assumptions are required about the ``target function'' (agnostic
learning), arbitrary noise is permitted in the training sample, and the
target outputs as well as the network outputs may be arbitrary reals.
The number of computation steps of the learning algorithm LEARN that
we construct is bounded by a polynomial in the bit-length $n$ of the
fixed number of input variables, in the bound $s$ for the allowed
bit-length of weights, in $\frac{1} {\varepsilon}$, where $\varepsilon$
is some arbitrary given bound for the true error of the neural net
after training, and in $\frac{1}{\delta}$ where ${\delta}$ is some
arbitrary given bound for the probability that the learning algorithm
fails for a randomly drawn training sample. However the computation
time of LEARN is exponential in the number of weights of the considered
network architecture, and therefore only of interest for neural nets of
small size.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-003:
----------------------------------------
Perspectives of Current Research about the Complexity of Learning on
Neural Nets
by Wolfgang Maass, Institute for Theoretical Computer Science,
Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz,
Austria
Abstract:
This paper discusses within the framework of computational learning
theory the current state of knowledge and some open problems
in three areas of research about learning on feedforward neural nets:
\begin{itemize}
\item[--]Neural nets that learn from mistakes
\item[--]Bounds for the Vapnik-Chervonenkis dimension of neural nets
\item[--]Agnostic PAC-learning of functions on neural nets.
\end{itemize}
All relevant definitions are given in this paper, and no previous
knowledge about computational learning theory or neural nets is
required.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-005:
----------------------------------------
Simulating Access to Hidden Information while Learning
by Peter Auer, Technische Universit\"{a}t Graz,
Philip M. Long, Duke University
Abstract:
We introduce a new technique which enables a learner without access to
hidden information to learn nearly as well as a learner with access to
hidden information. We apply our technique to solve an open problem of
Maass and Tur\'{a}n, showing that for any concept class $F$, the least
number of queries sufficient for learning $F$ by an algorithm which has
access only to arbitrary equivalence queries is at most a factor of
$1/\log_2 (4/3)$ more than the least number of queries sufficient for
learning $F$ by an algorithm which has access to both arbitrary
equivalence queries and membership queries. Previously known results
imply that the $1/\log_2 (4/3)$ in our bound is best possible. We
describe analogous results for two generalizations of this model to
function learning, and apply those results to bound the difficulty of
learning in the harder of these models in terms of the difficulty of
learning in the easier model. We bound the difficulty of learning
unions of $k$ concepts from a class $F$ in terms of the difficulty of
learning $F$. We bound the difficulty of learning in a noisy
environment for deterministic algorithms in terms of the difficulty of
learning in a noise-free environment. We apply a variant of our
technique to develop an algorithm transformation that allows
probabilistic learning algorithms to nearly optimally cope with noise.
A second variant enables us to improve a general lower bound of
Tur\'{a}n for the PAC-learning model (with queries). Finally, we show
that logarithmically many membership queries never help to obtain
computationally efficient learning algorithms.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-006:
----------------------------------------
A Stop Criterion for the Boltzmann Machine Learning Algorithm
by Berthold Ruf, Technical University Graz
Abstract:
Ackley, Hinton and Sejnowski introduced a very interesting and
versatile learning algorithm for the Boltzmann machine (BM). However it
is difficult to decide when to stop the learning procedure. Experiments
have shown that the BM may destroy previously achieved results when the
learning process is executed for too long. This paper introduces a new
quantity, the conditional divergence, measuring the learning success
for the inputs of the data set. To demonstrate its use, some
experiments are presented, based on the Encoder Problem.
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-007:
----------------------------------------
VC-Dimensions for Graphs
by Evangelos Kranakis, Carleton University,
Danny Krizanc, Carleton University,
Berthold Ruf, Technical University Graz,
Jorge Urrutia, University of Ottawa,
Gerhard J. Woeginger, Technical University Graz
Abstract:
We study set systems over the vertex set (or edge set) of some graph
that are induced by special graph properties like clique,
connectedness, path, star, tree, etc. We derive a variety of
combinatorial and computational results on the $\vc$
(Vapnik-Chervonenkis) dimension of these set systems.
For most of these set systems (e.g.\ for the systems induced by trees,
connected sets, or paths), computing the $\vc$-dimension is an
$\np$-hard problem. Moreover, determining the $\vc$-dimension for set
systems induced by neighborhoods of single vertices is complete for the
class $\lognp$. In contrast to these intractability results, we show
that the $\vc$-dimension for set systems induced by stars is computable
in polynomial time. For set systems induced by paths or cycles, we
determine the extremal graphs $G$ with the minimum number of edges such
that $\vc_{{\cal P}}(G)\ge k$. Finally, we show a close relation
between the $\vc$-dimension of set systems induced by connected sets of
vertices and the $\vc$-dimension of set systems induced by connected
sets of edges; the argument proceeds via the line graph of the
corresponding graph.
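For small ground sets, the Vapnik-Chervonenkis dimension of any
explicitly given set system can be checked by brute force, which makes
the definitions concrete. A sketch (the toy family below is hand-built
for illustration and is not one of the graph-induced systems of the
report):

```python
from itertools import combinations

def shatters(family, A):
    """True if every subset of A arises as A ∩ S for some S in family."""
    traces = {frozenset(A) & S for S in family}
    needed = {frozenset(c) for r in range(len(A) + 1)
              for c in combinations(A, r)}
    return needed <= traces

def vc_dimension(ground_set, family):
    """Largest d such that some d-subset of ground_set is shattered."""
    family = [frozenset(S) for S in family]
    best = 0
    for d in range(1, len(ground_set) + 1):
        if any(shatters(family, set(A)) for A in combinations(ground_set, d)):
            best = d
    return best

# A small chain-like family: {1,2} is shattered, no 3-set is.
family = [set(), {1}, {2}, {1, 2}, {1, 2, 3}]
dim = vc_dimension([1, 2, 3], family)
```

The exhaustive check is exponential in the ground set, which is
consistent with the report's NP-hardness results for the graph-induced
systems.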
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-008:
----------------------------------------
Computing the Maximum Bichromatic Discrepancy, with applications to
Computer Graphics and Machine Learning
by David P. Dobkin, Princeton University,
Dimitrios Gunopulos, Princeton University,
Wolfgang Maass, Technische Universitaet Graz
Abstract:
Computing the maximum bichromatic discrepancy is an interesting
theoretical problem with important applications in computational
learning theory, computational geometry and computer graphics. In this
paper we give algorithms to compute the maximum bichromatic discrepancy
for simple geometric ranges, including rectangles and halfspaces. In
addition, we give extensions to other discrepancy problems.
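For axis-aligned rectangles the quantity admits an obvious brute-force
baseline against which faster algorithms can be understood. A sketch,
assuming the usual definition of the bichromatic discrepancy of a range
as the absolute difference between its red and blue point counts (this
definition is stated here as an assumption, not quoted from the paper):

```python
def inside(p, x0, x1, y0, y1):
    x, y = p
    return x0 <= x <= x1 and y0 <= y <= y1

def max_bichromatic_discrepancy(red, blue):
    """Brute force over axis-aligned rectangles whose sides pass
    through input coordinates; returns max |#red - #blue| inside."""
    xs = sorted({p[0] for p in red + blue})
    ys = sorted({p[1] for p in red + blue})
    best = 0
    for i, x0 in enumerate(xs):
        for x1 in xs[i:]:
            for j, y0 in enumerate(ys):
                for y1 in ys[j:]:
                    r = sum(inside(p, x0, x1, y0, y1) for p in red)
                    b = sum(inside(p, x0, x1, y0, y1) for p in blue)
                    best = max(best, abs(r - b))
    return best
```

This exhaustive version inspects O(n^4) rectangles and counts points in
each, so it is only viable for tiny inputs; the point of the paper is
to do much better for such simple range families.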
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-009:
----------------------------------------
A Finite Automaton Learning System using Genetic Programming
by Herman Ehrenburg, CWI,
Jeroen van Maanen, CWI
Abstract:
This report describes the Finite Automaton Learning System (FALS), an
evolutionary system designed to find small digital circuits that
duplicate the behaviour of a given finite automaton. FALS was developed
with the aim of gaining better insight into learning systems; it is
also intended to become a general-purpose automatic programming system.
The system is based on the genetic programming approach of evolving
programs for tasks instead of programming them explicitly. A
representation of digital circuits suitable for genetic programming is
given, as well as an extended crossover operator that removes the need
to specify an upper bound on the number of states in advance.
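As background on the general setup (the details below are illustrative
assumptions, not taken from the report): in such evolutionary systems a
candidate is typically scored by how often its behaviour matches the
target automaton's on sample inputs. A minimal fitness sketch for
automata with per-state outputs:

```python
import random

def run(automaton, inputs):
    """automaton = (start, delta, out), with delta[(state, sym)] -> state
    and out[state] -> output bit. Returns the output sequence."""
    start, delta, out = automaton
    state, outputs = start, []
    for sym in inputs:
        state = delta[(state, sym)]
        outputs.append(out[state])
    return outputs

def fitness(candidate, target, trials=200, length=6, seed=0):
    """Fraction of random input strings on which the candidate
    reproduces the target's output sequence exactly."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = [rng.choice([0, 1]) for _ in range(length)]
        hits += run(candidate, s) == run(target, s)
    return hits / trials

# Target: parity automaton (output = parity of the 1s seen so far).
parity = (0, {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}, {0: 0, 1: 1})
```

A genetic programming loop would use such a score to select candidates
for crossover and mutation; FALS itself evolves circuit representations
rather than explicit state tables.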
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-010:
----------------------------------------
On Specifying Boolean Functions by Labelled Examples
by Martin Anthony, London School of Economics,
Graham Brightwell, London School of Economics,
John Shawe-Taylor, Royal Holloway, University of London
Abstract:
We say a function $t$ in a set $H$ of $\{0,1\}$-valued functions
defined on a set $X$ is {\it specified} by $S \subseteq X$ if the only
function in $H$ which agrees with $t$ on $S$ is $t$ itself. The {\it
specification number} of $t$ is the least cardinality of such an $S$.
For a general finite class of functions, we show that the specification
number of any function in the class is at least equal to a parameter
from~\cite{RS} known as the testing dimension of the class. We
investigate in some detail the specification numbers of functions in
the set of linearly separable Boolean functions of $n$
variables---those functions $f$ such that $f^{-1}(\{0\})$ and
$f^{-1}(\{1\})$ can be separated by a hyperplane. We present general
methods for finding upper bounds on these specification numbers and we
characterise those functions which have largest specification number.
We obtain a general lower bound on the specification number and we show
that for all {\it nested} functions, this lower bound is attained. We
give a simple proof of the fact that for any linearly separable Boolean
function, there is exactly one set of examples of minimal cardinality
which specifies the function. We discuss those functions which have
limited dependence, in the sense that some of the variables are
redundant (that is, there are irrelevant attributes), giving tight
upper and lower bounds on the specification numbers of such functions.
We then bound the average, or expected, number of examples needed to
specify a linearly separable Boolean function. In the final section of
the paper, we address the complexity of computing specification numbers
and related parameters.
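The definition lends itself to a direct brute-force computation for
tiny classes; a sketch, with functions represented as dictionaries over
an explicit domain (the toy class below is for illustration only):

```python
from itertools import combinations

def specification_number(t, H, X):
    """Least |S|, S ⊆ X, such that t is the only function in H
    agreeing with t on S. Functions are dicts from X to {0, 1}."""
    others = [h for h in H if h != t]
    for k in range(len(X) + 1):
        for S in combinations(X, k):
            # S specifies t iff every other function disagrees somewhere on S.
            if all(any(h[x] != t[x] for x in S) for h in others):
                return k

# All four {0,1}-valued functions on a two-point domain.
X = ['a', 'b']
H = [{'a': i, 'b': j} for i in (0, 1) for j in (0, 1)]
t = {'a': 0, 'b': 0}
```

Here every proper subset of X leaves some other function agreeing with
t, so the specification number is |X| = 2; the paper's interest is in
bounding this quantity for structured classes such as the linearly
separable functions.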
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-012:
----------------------------------------
On the relations between discrete and continuous complexity theory
by Klaus Meer, RWTH Aachen
Abstract:
Relations between discrete and continuous complexity models are
considered. The present paper is devoted to combining both models. In
particular, we analyze the 3-Satisfiability problem. The existence of
fast decision procedures for this problem over the reals is examined,
based on certain conditions in the discrete setting. Moreover, we study
the behaviour of exponential-time computations over the reals depending
on the real complexity of 3-Satisfiability. This is done using tools
from complexity theory over the integers.
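The discrete side of the comparison can be stated in a few lines; a
brute-force 3-Satisfiability check, exponential in the number of
variables (which is exactly the kind of behaviour whose real-number
analogue the report studies):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """clauses: list of 3-literal clauses; literal +i means variable i
    is true, -i means variable i is false (variables numbered from 1)."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals evaluates to true.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# (x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ x3) is satisfiable.
example = satisfiable([(1, 2, 3), (-1, -2, 3)], 3)
```

Over the reals, the corresponding question is whether a system of
polynomial conditions encoding such a formula admits a fast decision
procedure in the Blum-Shub-Smale model.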
----------------------------------------
NeuroCOLT Technical Report NC-TR-95-014:
----------------------------------------
Grundlagen der reellen Komplexit\"atstheorie
(Foundations of Real Complexity Theory)
by Klaus Meer, RWTH Aachen
Abstract: (in English - text is in German)
Complexity theory deals with the question of classifying mathematical
problems according to the difficulty they provide for algorithmic
solutions. This is generally related to
\begin{itemize}
\item finding efficient solution-algorithms,
\item analyzing structural properties which make problems difficult to
solve and
\item comparing problems.
\end{itemize}
Contrary to the situation in classical complexity theory, the real
approach is interested in studying problems defined on continuous
structures. The starting point for the present lecture notes is the
model of a real Turing machine as introduced in 1989 by Blum, Shub,
and Smale. We begin with a formal definition of notions like
computability, decidability and efficiency. This gives rise to the
complexity classes $P_{\R}$ and $NP_{\R}$. After analyzing basic
properties (reducibility, $NP_{\R}$-completeness, existence of complete
problems), we turn to the decidability of problems in the class
$NP_{\R}$. To this end, results on quantifier elimination and on the
structure of semialgebraic sets are investigated. Finally, methods for
proving lower bounds are presented; for this purpose we show a real
version of Hilbert's Nullstellensatz.
Table of contents:
0. Introduction
1. The computational model of Blum, Shub, and Smale
2. Complexity theory for the BSS-model
3. Existential theory over the reals
4. Lower bounds
References
-----------------------
The reports can be accessed and printed as follows (shown here for
NC-TR-95-006; substitute the number of any other report in the list):
% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-006.ps.Z
ftp> bye
% zcat nc-tr-95-006.ps.Z | lpr -l
Uncompressed versions of the postscript files have also been
left for anyone not having an uncompress facility.
A full list of the currently available Technical Reports in the
Series is held in a file `abstracts' in the same directory.
The files may also be accessed via WWW starting from the NeuroCOLT homepage:
http://www.dcs.rhbnc.ac.uk/neurocolt.html
Best wishes
John Shawe-Taylor