Neural Networks, Volume 6
Volume 6, Number 1, 1993
- Stephen Grossberg, John G. Taylor:
The fifth anniversary of neural networks. 1
- Kurt Hornik, Maxwell B. Stinchcombe, Halbert White:
Letters to the editor. 3
- Vladik Kreinovich:
Letters to the editor. 3-4
- Terry Bossomaier, Natalina Isidoro, Adrian Loeff:
Errors from grid approximation of IFS codes. 5-6
- Marc M. Van Hulle, Tom Tollenaere:
A modular artificial neural network for texture processing. 7-32
- Kunihiko Fukushima, Taro Imagawa:
Recognition and segmentation of connected characters with selective attention. 33-41
- Stephen Grossberg, Frank H. Guenther, Daniel Bullock, Douglas N. Greve:
Neural representations for sensory-motor control, II: Learning a head-centered visuomotor representation of 3-D target position. 43-67
- Jeffrey Hoffman, Josef Skrzypek, Jacques J. Vidal:
Cluster network for recognition of handwritten, cursive script characters. 69-78
- Andrew H. Gee, Sreeram V. B. Aiyer, Richard W. Prager:
An analytical framework for optimizing neural networks. 79-97
- Sam-Kit Sin, Rui J. P. de Figueiredo:
Efficient learning procedures for optimal interpolative nets. 99-113
- Masahiko Morita:
Associative memory with nonmonotone dynamics. 115-126
- Friedrich Biegler-König, Frank Bärmann:
A learning algorithm for multilayered neural networks based on linear least squares problems. 127-131
- Kevin N. Gurney:
Training nets of stochastic units using system identification. 133-145
- Conferences on neural networks and related topics. 151-159
Volume 6, Number 2, 1993
- Shun-ichi Amari:
A universal theorem on learning curves. 161-166
- Shuji Yoshizawa, Masahiko Morita, Shun-ichi Amari:
Capacity of associative memory using a nonmonotonic neuron model. 167-176
- Rüdiger W. Brause:
The error-bounded descriptional complexity of approximation networks. 177-187
- John J. Shynk, Neil J. Bershad:
Stationary points of a single-layer perceptron for nonseparable data models. 189-202
- Kevin T. Judd, Kazuyuki Aihara:
Pulse propagation networks: A neural network model that uses temporal coding by action potentials. 203-215
- Koji Nakajima, Yoshihiro Hayakawa:
Correct reaction neural network. 217-222
- Albert Y. Zomaya, Tarek M. Nabhan:
Centralized and decentralized neuro-adaptive robot controllers. 223-244
- Haluk Ögmen:
A neural theory of retino-cortical dynamics. 245-273
- Michael Sabourin, Amar Mitiche:
Modeling and classification of shape using a Kohonen associative memory with selective multiresolution. 275-283
- Christopher Ting, Keng-Chee Chuang:
An adaptive algorithm for neocognitron to recognize analog images. 285-299
Volume 6, Number 3, 1993
- Mats Bengtsson:
A neural system as a dynamical model for early vision. 313-325
- Haruo Kobayashi, Takashi Matsumoto, Tetsuya Yagi, Takuji Shimmi:
Image processing regularization filters on layered architecture. 327-350
- Thierry Denoeux, Régis Lengellé:
Initializing back propagation networks with prototypes. 351-363
- Alessandro Sperduti, Antonina Starita:
Speed up learning and network optimization with extended back propagation. 365-383
- Charles W. Lee:
Learning in neural networks by using tangent planes to constraint surfaces. 385-392
- Lars Kai Hansen:
Stochastic linear learning: Exact test and training error averages. 393-396
- Mohamad T. Musavi, K. Kalantri, Wahid Ahmed, Khue Hiang Chan:
A minimum error neural network (MNN). 397-407
- Jörgen M. Karlholm:
Associative memories with short-range, higher order couplings. 409-421
- John G. Taylor, Stephen Coombes:
Learning higher order correlations. 423-427
- Martin Brown, Chris J. Harris, Patrick C. Parks:
The interpolation capabilities of the binary CMAC. 429-440
- Geoffrey J. Chappell, John G. Taylor:
The temporal Kohonen map. 441-445
Volume 6, Number 4, 1993
- Stephen Grossberg:
A solution of the figure-ground problem for biological vision. 463-483
- Hiroaki Gomi, Mitsuo Kawato:
Recognition of manipulated objects by motor learning with modular architecture networks. 485-497
- Gyöngyi Gaál:
Population coding by simultaneous activities of neurons in intrinsic coordinate systems defined by their receptive field weighting functions. 499-515
- Wolfram Schiffmann, H. Willi Geffers:
Adaptive control of dynamic systems by back propagation networks. 517-524
- Martin Fodslette Møller:
A scaled conjugate gradient algorithm for fast supervised learning. 525-533
- Asim Roy, Lark Sang Kim, Somnath Mukhopadhyay:
A polynomial time algorithm for the construction and training of a class of multilayer perceptrons. 535-545
- Ulrich Ramacher:
Hamiltonian dynamics of neural networks. 547-557
- Giancarlo Parodi, Sandro Ridella, Rodolfo Zunino:
Using chaos to generate keys for associative noise-like coding memories. 559-572
- Gregory Allen Kohring:
On the Q-state neuron problem in attractor neural networks. 573-581
- Stevan V. Odri, Dusan P. Petrovacki, Gordana A. Krstonosic:
Evolutional development of a multilevel neural network. 583-595
- Conferences on neural networks and related topics. 597-606
Volume 6, Number 5, 1993
- Dale A. Brown:
Letters to the editor. 607-608
- Alexander Korn:
Letters to the editor. 608
- David G. Stork:
Letter to the editor. 609
- Nico Weymaere:
Letter to the editor. 611
- Arjen van Ooyen, Bernard Nienhuis:
Response to letter by N. Weymaere. 611-612
- Terry M. Caelli, David McG. Squire, Tom P. J. Wild:
Model-based neural networks. 613-625
- Lei Xu:
Least mean square error reconstruction principle for self-organizing neural-nets. 627-648
- Pascal Koiran:
On the complexity of approximating mappings using feedforward networks. 649-653
- Michel Benaïm:
The "off line learning approximation" in continuous time neural networks: An adiabatic theorem. 655-665
- Frank E. McFadden, Yun Peng, James A. Reggia:
Local conditions for phase transitions in neural networks with variable connection strengths. 667-676
- Theodore A. Burton:
Averaged neural networks. 677-680
- Harald Englisch, Yegao Xiao, Kailun Yao:
Strongly diluted networks with selfinteraction. 681-688
- Haruhisa Takahashi, Etsuji Tomita, Tsutomu Kawabata:
Separability of internal representations in multilayer perceptrons with application to learning. 689-703
- J. M. Minor:
Parity with two layer feedforward nets. 705-707
- Katsunori Shimohara, Tadasu Uchiyama, Yukio Tokunaga:
Subconnection neural network for event-driven temporal sequence processing. 709-718
- Youngjik Lee, Sang-Hoon Oh, Myung Won Kim:
An analysis of premature saturation in back propagation learning. 719-728
- Shigeo Abe, Masahiro Kayama, Hiroshi Takenaga, Tadaaki Kitamura:
Extracting algorithms from pattern classification neural networks. 729-735
- Conferences on neural networks and related topics. 741-752
Volume 6, Number 6, 1993
- Brendan L. Rogers:
New conditioned stimulus trace circuit for the rabbit's nictitating membrane response. 753-769
- William Finnoff, Ferdinand Hergert, Hans-Georg Zimmermann:
Improving model selection by nonconvergent methods. 771-783
- Hubertus M. A. Andree, Gerard T. Barkema, Wim Lourens, Arie Taal, Jos C. Vermeulen:
A comparison study of binary feedforward neural networks and digital circuits. 785-790
- Faouzi Bouslama, Akira Ichikawa:
Application of neural networks to fuzzy control. 791-799
- Ken-ichi Funahashi, Yuichi Nakamura:
Approximation of dynamical systems by continuous time recurrent neural networks. 801-806
- Thierry Catfolis:
A method for improving the real-time recurrent learning algorithm. 807-821
- Mark D. Plumbley:
Efficient information transfer and anti-Hebbian neural networks. 823-833
- Pierre Courrieu:
A convergent generator of neural networks. 835-844
- Ali A. Minai, Ronald D. Williams:
On the derivatives of the sigmoid. 845-853
- Masahiko Arai:
Bounds on the number of hidden units in binary-valued three-layer neural networks. 855-860
- Moshe Leshno, Vladimir Ya. Lin, Allan Pinkus, Shimon Schocken:
Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. 861-867
- Shashank K. Mehta, Laszlo Fulop:
An analog neural network to solve the Hamiltonian cycle problem. 869-881
- Conferences on neural networks and related topics. 883-893
Volume 6, Number 7, 1993
- Teuvo Kohonen:
Physiological interpretation of the Self-Organizing Map algorithm. 895-905
- Malcolm R. J. McQuoid:
Neural ensembles: Simultaneous recognition of multiple 2-D visual objects. 907-917
- Yasuhiro Wada, Mitsuo Kawato:
A neural network model for arm trajectory formation using forward and inverse dynamics models. 919-932
- Hiroaki Gomi, Mitsuo Kawato:
Neural network control for a closed-loop system using feedback-error-learning. 933-946
- Bruce P. Graham, Stephen J. Redman:
Dynamic behaviour of a model of the muscle stretch reflex. 947-962
- Shigetoshi Nara, Peter Davis, Hiroo Totsuji:
Memory search using complex dynamics in a recurrent neural network model. 963-973
- Francesca Albertini, Eduardo D. Sontag:
For neural networks, function determines form. 975-990
- Abhay B. Bulsari:
Some analytical solutions to the general approximation problem for feedforward neural networks. 991-996
- Nicholas J. Redding, Adam Kowalczyk, Tom Downs:
Constructive higher-order network that is polynomial time. 997-1010
- Margit Kinder, Wilfried Brauer:
Classification of trajectories - Extracting invariants with a neural network. 1011-1017
- Jan Mielniczuk, Joanna Tyrcha:
Consistency of multilayer perceptron regression estimators. 1019-1022
- Enrico A. Carrara, Franco Pagliari, Claudio Nicolini:
Neural networks for the peak-picking of nuclear magnetic resonance spectra. 1023-1032
- Donald F. Specht:
The general regression neural network - Rediscovered. 1033-1034
- Henrik Schiøler:
Response to letter by D. F. Specht. 1034
Volume 6, Number 8, 1993
- Menashe Dornay, Ferdinando A. Mussa-Ivaldi, Joseph McIntyre, Emilio Bizzi:
Stability constraints for the distributed control of motor behavior. 1045-1059
- Hidetoshi Nishimori, Ioan Opris:
Retrieval process of an associative memory with a general input-output function. 1061-1067
- Kurt Hornik:
Some new results on neural network approximation. 1069-1072
- Jun Wang, Behnam Malakooti:
Characterization of training errors in supervised learning using gradient-based rules. 1073-1087
- David A. Sprecher:
A universal mapping for Kolmogorov's superposition theorem. 1089-1094
- Tadashi Masuda:
Model of competitive learning based upon a generalized energy function. 1095-1103
- Steve G. Romaniuk, Lawrence O. Hall:
Divide and Conquer Neural Networks. 1105-1116
- Brian A. Telfer, David P. Casasent:
Minimum-cost associative processor for piecewise-hyperspherical classification. 1117-1130
- Xinhua Zhuang, Yan Huang, Su-Shing Chen:
Better learning for bidirectional associative memory. 1131-1146
- Dietmar Ruwisch, Mathias Bode, Hans-Georg Purwins:
Parallel hardware implementation of Kohonen's algorithm with an active medium. 1147-1157
- Nikola Samardzija:
Applications of information storage matrix neural networks. 1159-1167
- Yeou-Fang Wang, Jose B. Cruz Jr., James H. Mulligan Jr.:
Multiple training concept for back-propagation neural networks for use in associative memories. 1169-1175