Comparing Structures Using a Hopfield-Style Neural Network

Published in: Applied Intelligence

Abstract

Labeled graphs are an appropriate and popular representation of structured objects in many domains. If the labels describe the properties of real-world objects and their relations, finding the best match between two graphs becomes the weakly defined, NP-complete task of establishing a mapping that maps similar parts onto each other while preserving as much of the overall structural correspondence as possible. In this paper, earlier approaches to structural matching and constraint relaxation by spreading activation in neural networks are combined with the method of solving optimization tasks using Hopfield-style nets. The approximate matching task is reformulated as the minimization of a quadratic energy function. The design of the approach lets the user change the parameters and the dynamics of the net, so that knowledge about matching preferences can be included easily and transparently. The last section demonstrates the successful application of this approach to classification and learning in the domain of organic chemistry.
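The energy-minimization formulation summarized above can be illustrated with a minimal sketch (not the authors' implementation; all graphs, weights, and parameter values below are illustrative). Label-compatible node pairs become the units of a discrete Hopfield-style net, structurally consistent pairs excite each other, conflicting pairs (one-to-one violations or adjacency mismatches) inhibit each other, and asynchronous threshold updates descend the quadratic energy, leaving the active units as an approximate match:

```python
import numpy as np

# Two small labeled graphs: node labels plus undirected edge sets.
labels_a = {0: "C", 1: "C", 2: "O"}
edges_a = {(0, 1), (1, 2)}
labels_b = {0: "C", 1: "O", 2: "C"}
edges_b = {(0, 2), (1, 2)}

def adj(edges, i, k):
    return (i, k) in edges or (k, i) in edges

# One unit per label-compatible node pair (i from graph A, j from graph B).
units = [(i, j) for i in labels_a for j in labels_b if labels_a[i] == labels_b[j]]
n = len(units)

# Symmetric weights: excitatory between structurally consistent pairs,
# strongly inhibitory between conflicting ones, so that jointly active
# conflicting units raise the quadratic energy.
W = np.zeros((n, n))
for a, (i, j) in enumerate(units):
    for b, (k, l) in enumerate(units):
        if a == b:
            continue
        if i == k or j == l:
            W[a, b] = -2.0        # one-to-one mapping constraint violated
        elif adj(edges_a, i, k) == adj(edges_b, j, l):
            W[a, b] = 1.0         # adjacency preserved by the mapping
        else:
            W[a, b] = -2.0        # adjacency mismatch

theta = 0.5  # unit threshold (illustrative value)

def energy(x):
    # The quadratic energy minimized by the net.
    return -0.5 * x @ W @ x + theta * x.sum()

# Asynchronous discrete Hopfield updates; each accepted flip never
# increases the energy, so the net settles in a local minimum.
x = np.ones(n)
changed = True
while changed:
    changed = False
    for a in range(n):
        new = 1.0 if W[a] @ x > theta else 0.0
        if new != x[a]:
            x[a], changed = new, True

match = sorted(units[a] for a in range(n) if x[a] == 1.0)
print(match)  # the surviving units encode the node-to-node mapping
```

For these two label-C-C-O chains the net settles on the mapping {0→0, 1→2, 2→1}, i.e. the units (0, 0), (1, 2), (2, 1) stay active; real uses of this scheme tune the excitatory/inhibitory weights and the threshold to encode matching preferences, as the paper describes.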




Cite this article

Schädler, K., Wysotzki, F. Comparing Structures Using a Hopfield-Style Neural Network. Applied Intelligence 11, 15–30 (1999). https://doi.org/10.1023/A:1008320413168
