COMPUTER-AIDED DESIGN of digital electronic circuits and systems

Proceedings of a Symposium organized by The Commission of the European Communities, Directorate-General for Internal Market and Industrial Affairs, Brussels, November 1978.

Edited by Gerald MUSGRAVE, Brunel University, Uxbridge, Middlesex, U.K.

Published by North-Holland Publishing Company, Amsterdam · New York · Oxford, 1979, for The Commission of the European Communities, Directorate-General for Scientific and Technical Information and Information Management, Luxembourg. EUR 6379. ISBN 0 444 85374 X. © ECSC, EEC, EAEC, Brussels and Luxembourg, 1979.

LEGAL NOTICE: Neither the Commission of the European Communities nor any person acting on behalf of the Commission is responsible for the use which might be made of the following information.

FOREWORD

"What we have to learn to do we learn by doing" - Aristotle

With the rapid change in technology providing the ever increasing complexity of digital systems, it is essential to utilise the products of that technology in order to cope with the evolution. Computer aided design of digital electronic circuits and systems is essential to the ongoing development of any electronics and associated data processing industry. The European Communities recognised its importance in July 1974 when they initiated a programme of studies in the D.P. Project Bureau of the Commission. One study, the CAD Electronics Study, commenced in June 1977 as a feasibility project with the following objectives:

a. Assessment of the current state-of-the-art of CAD of logic design, from the conceptual specification through synthesis, simulation, testing and implementation, from printed circuit boards (PCBs) to very large scale integrated (VLSI) chips.
b. Time projection of designers' opportunities and requirements within an extrapolated electronics and computer evolution in the 1979-82 period.
c. Investigation of the opportunity in terms of strategic scientific, industrial and economic benefit.
d. Recommendations for further Community work, if appropriate, with detailed justification.

To match these objectives a two-phase project structure was used. First, a worldwide survey of CAD techniques applied to digital electronics was undertaken, calling for information from users, non-users and suppliers, and encompassing the product ranges of computers, communications, military systems etc. The second phase was an analysis of this data with respect to the implications for CAD development in Europe and the technology impact over the next quinquennium. One of the important conclusions of this work was the appalling ignorance of CAD techniques even by those who were purporting to be using the same. Hence the organisation of a three-day symposium in November 1978, where a state-of-the-art set of lectures was given, followed by important papers from leading authorities on the problem areas. In these presentations a balance was retained between software suppliers' and users' views. There was also the opportunity to present the structure, results and general conclusions of the EEC CAD Electronics Study and give delegates the opportunity to discuss the subject matter.

This book is a record of the lectures, papers and discussions at the symposium and covers the subject of CAD of electronic circuits and systems: its cost benefits, user requirements, problem areas and the impact of technology evolution. The management aspects of future trends and economic viabilities are also covered, which affords the reader a wide spectrum of information. This volume is clearly the work of many willing and cooperative authors whom I wish to acknowledge. It has only been possible by the foresight of the European Commission and the dedication and forbearance of Mr. Bir and his staff of the Joint D.P. Project Bureau of the Commission.

GERALD MUSGRAVE
BRUNEL UNIVERSITY

CONTENTS

INTRODUCTORY SESSION (Chairman: C. Garric, European Communities)
Opening Address - E. Davignon
Keynote Address - K. Teer

TECHNICAL SESSION I
Integrated Computer-Aided Design - An Industrial View - R.W. McGuffin
Product Specification and Synthesis - D. Lewin
Simulation of Digital Systems: Where We Are and Where We May Be Headed - S. Szygenda

TECHNICAL SESSION II
New Concepts in Automated Testing of Digital Circuits - M. Breuer
LSI Device CAD versus PCB Digital System CAD: Are Requirements Converging? - H. De Man
Current Trends in the Design of Digital Circuits
Computer Aided Design - The Problem of the 80's: Microprocessor Design
Computer Aided Design of Digital Computer Systems

TECHNICAL SESSION III
Aspects of a Large, Integrated CAD System
CAD in the Japanese Electronics Industry
Large Scale CAD User Experience - F. Mutel
Custom LSI Design Economics
User Experience: In Simulation and Testing

TECHNICAL SESSION IV
Verification of LSI Digital Circuit Design
Integrated CAD for LSI
Development of a Digital Test Generation System - P. Wolski
An Approach to a Testing System for LSI

TECHNICAL SESSION V
An Engineering Components Data Base
Automatic Gate Allocation, Placement and Routing

TECHNICAL FORUM I - Chairman: Jakob Vlietstra
TECHNICAL FORUM II - Chairman: Jakob Vlietstra

PROJECT SESSION
European Communities Study on CAD of Digital Circuits and Systems: Introduction; Organisational Aspects; Technical Perspective; Survey in USA and Canada - G. Musgrave and others

FINAL SESSION
European Economic Community Perspective

Index of Authors

Further authors named in the contents include Klaschka, Hoffman, Lipp, Kani, Yamada, Teramoto, Hembrough, Abel, Pabich, Colangelo, Lattin, Gaskin, Klomp, De Mari, Carter, Tomljanovich, Schauer, Quillin, Jones, Loosemore, Roberts, Avenier, Michard, Rault and Bir.

OPENING ADDRESS

E. Davignon
European Communities

It is a pleasure for me to welcome this gathering, which includes many of the world's most distinguished specialists in that key tool of advanced technology, computer-aided design. Your numbers and quality augur well for the conference. We look forward to hearing contributions from leaders in the field not only from Europe, but from the United States, Japan, and even the Soviet Union. I would like to start it off by placing it in the political and economic context of the Commission's objectives for industrial policy.

Not only Europe, but the developed world as a whole is in the throes of fundamental industrial change, due not only to the ending of a long period of sustained economic growth, but to deep shifts in its industrial structure. As the developing nations of the world acquire competence and capability in many of the older industries, from textiles and shipbuilding to steel and cars - a process which we must welcome as offering them the chance to live and even thrive - the traditional industrial regions such as Europe must look increasingly to the newer technologies and industries as the main source of future economic growth. Europe has to become a high technology workshop for the world.

Of these new technologies, far the most important is the complex of electronic industries associated with the processing and communicating of information. This is both the nervous system and the key base technology for a modern industrial or indeed post-industrial society. The parts of this complex still often go by separate names - the computer industry, telecommunications, electronic components - but I do not need to tell this assembly that they increasingly are one, as the French have recognised in their new word "télématique". It is not too much to say that the competitiveness of the large majority of European industry and services will depend on the speed and competence with which it applies the new electronic technology to its products and processes and to the services it offers during the next ten years.

The facts speak for themselves. In mid-recession the market for computing in Europe is still growing at something approaching 20% per year in fixed money terms, while each year the value for that money in terms of computing power is multiplied several times. We already expect the number of people employed in the direct use or manufacture of computing power in Europe to double, from some 1 million in 1975 to 2 million by the mid 1980s. Badly handled, the information revolution can indeed lead to a new crisis of unemployment. Imaginatively handled, despite the many jobs it displaces, it can lead to a vast new range of employment opportunities.

For these reasons, the Community has recognised that both the application of data-processing throughout the economy and the industry itself deserve vigorous public support at Community level, both to help create a receptive homogeneous market and to match the immense public resources which are put behind the industry in other advanced regions of the world, such as the US and Japan.

The first political recognition of this need was the Resolution of the Council of Ministers of the Community of July 1976, which called for a Community policy for data-processing. That Resolution stressed in particular the need to promote collaboration in data-processing applications: the need for users to be brought together so that the power of computing could be more effectively applied. Since then, the Council has adopted a number of priority studies, exploring the needs and feasibility of action in certain specific fields of user applications. Among these were two on Computer Aided Design: a CAD study in the building and construction field, and the other in Digital Circuit Design. The Council is now approaching a more critical political test: a decision on a four-year programme for informatics which would provide more systematic and greater support for a wider spectrum of user applications. CAD will be one important element in this programme. It will be the framework in which practical proposals emerging from the work of this conference can be implemented.

Ladies and Gentlemen, why does the Commission attach importance to CAD as a tool of economic development? In every industrial period certain industries play a key part in the development of society. Today, the key industry is the complex of industries covering the processing and communication of information and using electronic technology. CAD is clearly basic to electronics, the subject of discussion today. The designer must wrestle with the challenge of ever-growing complexity which only computer aids and tools can enable him to master. Modern CAD is therefore a tool which European industry has to have.

Major sectors of industry, particularly those using advanced technology, such as aerospace, electronics and the automotive industry, are already forging ahead and making massive investments on their own account in this field. These investments can acquire larger importance for the Community when the technology is purposefully transferred to other sectors of industry. For example, the large investments made by the aerospace industry to develop three-dimensional systems enable the techniques, when proven, to be carried over and used in the shoe manufacturing, glass, plastics, and mould and die industries. These tools need to be available to a vast range of medium-sized and small firms.

It is now recognised, moreover, that in the future CAD will become an integrated part of the production process, combining the design processes with automated manufacture (as is already the case with an aircraft wing). In this, as in so many other fields of computing, education in the use of new techniques will be essential. And when it has it, industry will have to take account of the massive social implications of its introduction. These wider aspects of CAD are being studied systematically by the Commission in preparation for the four-year programme.

The study sponsored by the Community, which you will be discussing over the next few days, is designed to identify the state-of-the-art computer aids potentially available and to suggest what the Community might do to improve them and make them more accessible. We look forward to receiving your advice and hearing your views on what needs to be done.

A strong capability in these related industries is essential to Europe's future because: the character of our society will depend on our skill in using these technologies; most industries and many services will become dependent on these technologies; and the remarkable growth rate of the market for these industries will continue to represent an increasing element of European and world production and wealth. This vast complex of technologies, with its unprecedented challenge to human skill and endeavour, requires resources and investments which no single European nation could justifiably or possibly undertake on its own. It is my belief that the European Community should and can make a greater contribution to the development of this world technology than it has done so far.

There are, however, at least three vital tasks which only the Community can fulfil and which are wider than the modest programmes for data-processing which I described earlier. One is to ensure that the powerful broadband communications infrastructure needed in the electronic age is developed on a European scale. The second is to support the development of the key electronic technologies of the future which will permit Europe to become more than the follower which it has been in the past. And the third is to develop the activities in the fields of standardisation and procurement which alone can generate a true European market: effective standards, ready access to data. In this effort, the leading role will always fall to industry; national Governments can and will continue to play an essential supporting role.

In social terms, Europe has a vocation to ensure that in a European information society these formidable tools are in the hands of the citizen, in his workplace, school or home, and not solely in the hands of centralised power, whether management, Government or anyone else, so that the potential economic and social benefits can be harnessed to benefit mankind - a responsibility extending, in particular, to those who develop and apply the new techniques. We have, moreover, an immense educational responsibility in this new age, for we cannot accept the paradox of a Europe with many millions of unemployed, whose economic development is held up by an acute shortage of critical software and engineering skills in the most advanced fields, and whose citizens have only the barest understanding of the potential implications for them of the new technology.

The Community must also be open to mutually beneficial co-operation with organisations outside: there is so much work to be done that we cannot afford to reinvent the wheel. It is for this reason that the Commission chose to share with you the results of this study, and I hope that spirit will inform your discussions too. I hope you will both contribute to and benefit from your participation in this Symposium, to which I wish all success.

KEYNOTE ADDRESS

K. Teer
Philips Research Laboratories, Eindhoven, The Netherlands

The following are the basic notes which Dr. Teer used to outline his main points.

1. The electronics industry is a relatively young and dynamic industry with potentially large growth figures, due to a very wide area of application and a high rate of innovation. Growth figures of the last decade materialized as substantially higher than those of industry as a whole or the Gross National Product. Notwithstanding that, the electronics industry too is subject to the industrial saturation phenomena of recent years. Electronic components especially present an investment and mass-production picture that could easily lead to overproduction.

2. It is easy to present amazing figures about the progress in semiconductor technology (in terms of bits and gates per square mm or per chip) and about the penetration of binary processing (in terms of traditional computer use as well as new applications). These facts are widely known and are assumed to be common knowledge at this symposium. Few will dispute that the "push" in the electronics field, in the past and for the future, is dominated by semiconductor technology, binary processing and satellite technology. The first two are especially related to the issue of this symposium.

3. It is relevant to notice that, apart from the pure electronic technology, optical technology is now emerging with special power in the transmission and recording of information. This certainly should not be seen as competing with, but as complementary to, the pure electronic hardware. It is beyond any doubt that "micro-optics" will give an enormous extra momentum to the electronic field.

4. There is a standard partitioning of the electronic field into telecommunication (telephony, telegraphy), radio (radio, television, radar, navigation), data processing (computers, data transmission) and instrumentation (measurement, control, registration); the regular market view follows similar lines. One of the striking trends, however, is that these divisions tend to merge in many ways. In particular, data processing penetrates almost every field. For the future it might be much more relevant to order the classification in terms of social categories: traffic systems, health care systems, production systems, distribution systems, education systems, office systems, home systems etc.
5. With the tools of microelectronics and micro-optics available, there is a remarkable situation growing where the central issues are on the move. Very schematically, we can say that the question is no longer 'how to make it' but 'how to use it', and the question is no longer 'how to reduce production cost' but 'how to reduce design cost'.

6. How to use it? Most present-day stories about microprocessors in newspapers and magazines start with a hard fact, namely the transistors per square mm, but then jump into vagueness and threat. The reader is left unsure about what actually the message is, but with an uneasy feeling that things might go extremely wrong, in particular concerning privacy, employment and human dignity. A cool, careful analysis is seldom available, which has much to do with our inability to foresee the use of modern electronics. It is true that our government bureaux, our banks, our offices, our health care, our education and our homes will all change - but how? To know better, we should transfer an experimental attitude, well trained in achieving new technologies, to the domain of using new technology.

7. A first step here is to order things in various levels, so that distinction is made between: better function of existing products; new products; new functions; new organisation; new social categories. Notwithstanding the uncertainties about applications, a few new subsystems can already be identified without too much science fiction:
- audio and visual facilities in the home for information acquisition, giving do-it-yourself education and active entertainment;
- the electronic file with powerful data and document retrieval, as a comfort for almost all environments in a broad range of sizes;
- the picture generator as a tool to explain (in instruction), to analyse (during design), to amuse (in entertainment) and to express oneself (in free-time creativity);
- the speech-addressable equipment, leading to hands-free use, a lower user threshold and often faster reaction of the equipment;
- the intelligent controller, optimizing the function of non-electronic equipment towards minimum energy, minimum pollution, maximum efficiency or maximum security;
- the electronic inspector and recognizer (of pictures, sounds and other inputs), to improve failure diagnosis of objects and human beings;
- the intelligent manipulator, which can be instructed by craftsmen on the working floor for flexible automation.

8. How to reduce cost in design? In fact the question is somewhat broader, namely how to simplify the design process so that speed, cost and clarity all benefit. This is the focal point of this symposium and will be discussed by a number of speakers much more able than the author of this contribution. Indeed it is of utmost importance that the physical parameters of the devices, the equivalent network, the logic concept, the layout and the photomasks can be achieved with the aid of automatic means. However, this is not sufficient: the computerized process should be standardized; the standard should be easily accessible; and the action should extend to higher levels than modules - the multichip domain, the level where computers are a basic building block.

9. As we are in this symposium as a European community, it is good to realize what the position of European industry is in its social-economic context. European industry is confronted with: increased competition; fragmentation of the market into national states; growing intervention of social forces in industrial activities; saturation phenomena in some product ranges; and indistinct and unbalanced relations between automation, employment, productivity, need for work, dislike of work, the need for leisure, the need for education and the demand for education. To attack these difficulties it is necessary to respond with an enthusiastic and original approach. Regrettably this may sound rhetorical, pathetic and illusion-like, but the validity of this observation cannot be denied. In that approach, new forms of cooperation are a necessity: cooperation of industries in the same line of business, cooperation of complementary industries, cooperation of industry and government, and cooperation of governments. Next to these 'musts' for European industry as a whole, there is an additional point for the electronics industry: the challenge to cooperate with categories of users in much closer coupling than the customer-supplier relation, in order to explore the possible answers to the question "how to use?".

TECHNICAL SESSION I
Chairman: J. Borel, Centre Nucléaire, France
INTEGRATED COMPUTER-AIDED DESIGN - AN INDUSTRIAL VIEW

R.W. McGuffin
International Computers Limited, Manchester, England

The underlying reasons for the growth of CAD systems are examined. The various aspects of an integrated, total technology CAD system are presented and discussed. An orthogonal view is also taken of CAD system design and, from that, some pointers to the future are examined.

1. INTRODUCTION

A general industrial overview presupposes that there is a common understanding in industry of what Computer Aided Design is, why it is used and what its benefits are. Nothing could be further from the truth. It is fair to say that the industry which has done most to understand the nature of, develop and exploit CAD is the computer industry. Superficially, it could be argued that this happened because there was in it a surfeit of cheap computer power and people who could understand (program) computers. In reality, these factors, combined with an emerging, fiercely competitive industry trying desperately to reduce exceedingly long timescales and high costs, caused it to become a leader in development and exploitation. This view is confirmed by the nature of the product, which ranges from integrated circuit chips through printed circuit boards and operating system software to mechanical frames and piece parts. It is not claimed that one CAD system will handle this diversity of design disciplines. However, a vast amount of experience has been gained on where CAD may be used cost-effectively.

1.1 Scope and Definition of Terms

The phrase 'computer-aided design' has, however, become devalued, since it now embraces activities which relate to the design process but are not necessarily in the design loop. Engineering design is a process of decision making in order to produce information to enable correct manufacture; CAD embraces the use of computer systems to improve decision taking, communication and information flow. For the purposes of this paper, it is worthwhile subdividing CAD into two categories:

(a) CAD - this is defined as the interaction of a designer with a computer in order to aid design decision making. The interaction need not be real time; in fact there are many good reasons why the interaction should be via batch job turn-round. The essence is that the computer is processing information supplied by the designer and yielding results that enable the best design decision to be made. Examples of this are as follows:

Simulation - here, a designer is performing experiments on a model of the system he is designing. The simulated results of these experiments will cause the designer to modify the parameters under his control and hence obtain an adequate design compromise.

Component Placement - a designer may wish to minimise the total length of copper track on a printed circuit board. The position or placement of the components is the prime parameter, but this in turn is modified by track density profiles, technological rules etc. Here, the designer can interact with trial placements in order to optimise around these parameters.

(b) DA - Design Automation - this is defined as automatic design translation, or 'pushbutton' design. The design algorithm is embedded in a program rather than in the mind of the designer. Given a product structured for automation, DA acts as an amplifier. However, this is only achieved by sets of rules, codes of practice and compromises which make the design translation amenable to automation. The consequence of this is a restriction on design freedom; however, it does balance ease of design with ease of production.

These are not definitions, but merely a convenience to aid the understanding of an integrated approach to the subject.
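The Component Placement example can be sketched, purely as a modern illustration, as a greedy pairwise-interchange optimisation of an estimated copper track length. The slot grid, the half-perimeter wire-length estimate and the swap loop below are assumptions made for this sketch; they are not the algorithm of any system described in this volume.

```python
# Illustrative sketch only: greedy pairwise interchange of component
# placements to shorten the estimated total copper track length.
from itertools import combinations

def wire_length(placement, nets):
    # Half-perimeter bounding-box estimate of track needed per net.
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def improve_by_interchange(placement, nets):
    # Swap any two components whenever the swap shortens the estimate;
    # repeat until no swap helps, i.e. a local optimum is reached.
    placement = dict(placement)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(sorted(placement), 2):
            before = wire_length(placement, nets)
            placement[a], placement[b] = placement[b], placement[a]
            if wire_length(placement, nets) < before:
                improved = True                                  # keep the swap
            else:
                placement[a], placement[b] = placement[b], placement[a]  # undo
    return placement

# A toy 2x2 board: four packages in grid slots and three two-pin nets.
slots = {"IC1": (0, 0), "IC2": (0, 1), "IC3": (1, 0), "IC4": (1, 1)}
nets = [("IC1", "IC4"), ("IC2", "IC3"), ("IC1", "IC2")]
best = improve_by_interchange(slots, nets)
```

On this toy board the estimate drops from 5 to 3 grid units. A production placer would, as the text notes, also carry track density profiles and technology rules as constraints, and would let the designer interact with trial placements rather than accept the first local optimum.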
2. HISTORICAL DEVELOPMENT OF CAD

The milestones in the development of CAD have been well documented elsewhere and there is little point in reviewing them here, but it is instructive to examine the underlying reasons why there has been an acceleration in the growth of CAD systems.

In the late 1950s to early 60s, CAD was developed 'in house' by large manufacturers with large problems; in the computer world, this is the principal area of application. Up to this point, machines were relatively simple to understand (and hence, design!); consequently, CAD hardware (computer power, graphics etc) and software was minimal.

By the mid 60s to early 70s, with the advent of TTL small scale integrated circuits and the multi-layer printed circuit board, the complexity of the product increased by an order of magnitude. Further, as the complexity increased, the production problems mushroomed and there was an increasing demand to provide numerically controlled machines with output from the design data files. This period saw the growth of CAD/DA 'systems'. Hitherto, CAD had been the sole preserve of small, isolated teams of programmers and dedicated hardware; now attempts were made to rationalise the product design requirements with the CAD tools available. Smaller concerns had started to see the advantages of CAD but did not have the necessary expertise and computer power to develop their own systems. At the same time, computer graphics under the guise of CAD systems had been sold by over-energetic salesmen and were proving to be 'white elephants' - fun to play with but of not much relevance to design problems. It was in this atmosphere that the 'turnkey' CAD system started to evolve. Hardware and software engineering experts started to combine to form small companies. They tackled a limited range of problems (predominantly integrated circuit design), tailored the hardware and software to the problem, and made a lot of money. This, in many ways, was the salvation of the small company since, for a modest outlay, it could enjoy the advantages of CAD without the birth pains.

Instinctively, the world has linked CAD with hardware. However software, especially computer operating systems, is in many ways a more deserving candidate for the attention of CAD - how many hardware projects consume 200 to 600 man years? This problem will be examined more thoroughly later in this paper, when considering design management.

As the complexity of the product has grown, so have the degree of assistance required in its design and manufacture and the CAD tools themselves; although some leap-frogging has taken place, both have, in general, kept pace with the size of the problem to be solved. To quote from Ecclesiastes, 'He that increaseth knowledge, increaseth sorrow': with VLSI accelerating upon us, erstwhile semiconductor manufacturers are rapidly becoming acquainted with the problems it brings. It is not sufficient to be able to store, manipulate and delineate integrated circuit patterns onto a mask. Manufacturers which are in the 'system' business must seek total systems solutions, and it is in this world that the integrated CAD system finds its living. System problems are many-faceted; one of crucial importance is design integrity. Design integrity demands that the system concept be faithfully translated, perhaps through many levels of design decision, into the designed product.
3. THE INTEGRATED CAD SYSTEM

In this section I am going to draw upon the ICL experience with its CAD systems. The current ICL Design Automation system is called DA4, ie it is the fourth generation of CAD system. I believe it can be justly described as an 'integrated system', although, as with most industrial 'in house' products, it has evolved during its life. This evolution has, for the most part, been controlled; rolling evolution is a direct result of an unclear view of the future at any point in time. The primary concern is with design automation (translation) since, when DA4 was conceived (1974), it was considered that this provided the most cost-effective solution to ICL's design problems. The primary design task in ICL is to design logic which will be physically realised to make computers to make money, and DA4 was tailored to this task. Because of the company structure and the willingness of computer designers, technologists, test-gear designers etc. to co-operate with the DA team, the DA4 system (database and tools) became the unifying influence in design and production. Since the service was provided on the ICL 1900 range of computers under George 3, the operating system was used to control the housekeeping of the file store.

The overall CAD task is to balance ease of design with ease of production. The overall structure of DA4 is shown in figure 1; as may be seen, it provides a 'total technology' outlook:

* High level system design language - here the computer is considered at the architectural level, in terms of structure and behaviour. Simulation may be performed to confirm that the machine will obey, for example, the basic order code, and pattern comparison performed to ensure safe design decisions. Such descriptions also provide a useful source of test patterns for simulation at lower levels.

* Microprogram assembly - there is a continuing debate on whether microprograms are true software or hardware conveniences. From the DA4 viewpoint they constitute a vital part of the total technology and as such must be supported: the output from the microprogram assembler is often burnt into Proms, with the flow diagrams going to the field engineer etc.

* Detailed logic capture - when the system level description has reached a level low enough to be translated, in an orderly top down fashion, into detailed logic diagrams, engineers sketch the designs on gridded paper. These rough diagrams are coded by technicians and entered into the design database. This task is tedious and error-prone. Compressed logic data capture, with automatic expansion to the level of detail required for implementation, and techniques such as multistrings (highways, buses etc) and macrosymbols reduce the drawing and data entry problems and hence save time, reduce errors and show better logical flow. Standardisation leads to dramatic savings in both design and production.

* Logic simulation - an interactive tool used by many computer design projects; the tool itself has been optimised for interactive usage. As distinct from high level simulation, this is concerned with complex logic elements, nominal and worst case delays, timing race and hazard analysis etc. The model library contains around 600 descriptions of the elements currently used in ICL computers.

* Logic data file - conceptually, the whole computer can be thought of as an enormously large logic diagram, and the logic content of the computer is stored as 'pages' of logic: the diagram is cut into manageable portions (say 1000 gates), each called a page. The page is also a convenient drawing unit.

[Figure 1. ICL's integrated CAD system: high level system design, microprogram assemblers (feeding Proms) and detailed logic capture enter a logic design database with its libraries and tables; simulation and group checks act on that database; mapping into physical form yields an assembly file driving drawings, documentation, placement and tracking, production control, artwork and tests for PCBs, back planes, chips and cables.]
1500 logic gates and 10 256 bit Proms.integrated circuit chips. Control documentation for manual modification of boards. Base Board Test . Verification of manually produced test tapes. Photographic artwork. The process of mapping logical into physical is called 'Assembly Extract' and is performed with the aid of an engineer generated 'flyfile' which describes which pages or parts of pages. Production control documentation.the tracks on printed circuit boards. This process creates an assembly file upon which act a different variety of tools.this is an entire subject in its own right but. Fully validated manual placement. typically 300 . printed circuit boards. Using fault simulation the quality of the tapes may be assessed and diagnostic resolution determined. like many major computer manufacturers. before component insertion. multi-layer back planes (platters) and cables. Assembly Extract . ICL. The page is also a convenient drawing unit. Automatic technology-rule-obeying placement of components. Drill tapes for a wide variety of machines on many sites. logic cross-references and physical placement.18 R.as previously described. To save computer power (money) some rules are applied to logic design. The information to * * . Typically.the variety of output is large and the data expansion up to three orders of magnitude. * * * * * * * * * * Automatic tracking of printed circuit boards. The logic page contains all the information necessary for the design and field engineer alike. may be checked for unwanted open and short circuits by means of computer controlled probes.fast and inexpensive. Version control. Automatic tracking of integrated circuits. silk screens for component insertion. ie: it is the basic medium for communications. Typical boards contain up to 150 dual in-line integrated circuits. The benefits of these rules are that many thousands of board and IC chips types may have test patterns generated automatically. McGUFFIN 1000 gates) and called a page. 
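The kind of logic simulation described above - complex elements with nominal delays, where timing races and hazards must be made visible - can be illustrated with a minimal event-driven simulator. The sketch below is not the DA4 tool; it is a hypothetical two-NAND network with made-up 10 ns delays, showing how scheduling each gate output as a timed event exposes a transient spike on the output before it settles.

```python
import heapq
from collections import defaultdict
from itertools import count

# Hypothetical two-gate network: n1 = NAND(a, b); y = NAND(n1, c).
# Each gate: (input nets, output net, nominal delay in ns).
GATES = [(("a", "b"), "n1", 10), (("n1", "c"), "y", 10)]

def nand(*ins):
    return 0 if all(ins) else 1

def simulate(stimuli):
    """Event-driven simulation with nominal delays.  Returns the list of
    (time, net, value) changes, so transient spikes (hazards) are visible."""
    value = defaultdict(int)                 # net -> current logic value
    fanout = defaultdict(list)               # net -> gates it drives
    for gate in GATES:
        for net in gate[0]:
            fanout[net].append(gate)
    seq = count()                            # tie-breaker: keep scheduling order
    events = [(t, next(seq), net, v) for t, net, v in stimuli]
    heapq.heapify(events)
    waves = []
    while events:
        t, _, net, v = heapq.heappop(events)
        if value[net] == v:
            continue                         # no change: nothing to propagate
        value[net] = v
        waves.append((t, net, v))
        for ins, out, delay in fanout[net]:
            heapq.heappush(events, (t + delay, next(seq),
                                    out, nand(*(value[i] for i in ins))))
    return waves, dict(value)

waves, final = simulate([(0, "a", 1), (0, "b", 1), (0, "c", 1)])
# Steady state: n1 = 0, y = 1; 'waves' also records the brief glitch on y.
```

A worst-case analysis would simply rerun the same schedule with each gate's delay taken from a (minimum, maximum) pair rather than a single nominal figure.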
is diagram-based. the data file contains 'pages of logic'. Testing . for example. . Whether this interface is a natural design language or graphical is of second order importance. incompatibilities etc.functional tests may be applied to each component. * This is only a cross-section of the ICL design automation scheme but does give a flavour of the 'total technology' outlook.2 Conflicting Pulls The overriding benefit of CAD is cost reduction. 4. Timeliness. Group Checks . physical and logical. use of less skilled labour. in many respects. if we fully understood the design decision making process. 4. Managerial control. I will try to take a more orthogonal view of CAD systems capitalisation.AN INDUSTRIAL VIEW control this equipment is generated from the assembly file. then we would be able to provide methodologies and tools that would provide a safe (error free) design evolution. the task of the CAD system designer is to provide a framework in which design may take place in a controlled manner and tools to assist this process. on that group of boards which comprise a logical group. . Given that this is not the case. This l i s t . or group of components on a board by probes (reading and writing) at preselected points. Effective use of manpower . This may be translated into: * * * * * * Labour reduction.INTEGRATED COMPUTER-AIDED DESIGN . try to indicate some of the shortcomings of today's approaches and where the future lies. The prime requirement is that the design engineer should be in harmony with his tools and that they should provide fast response on the implications of design decisions. the logic pages which describe the computer are partitioned into chips and boards. SYSTEM CAD In this section. can be avoided.. company organisation etc. This framework will have many attributes but the most important is probably the man-machine interface.1 Design Not enough is known about the nature of design. * 19 Probe Test . The latter.as previously described. 
The framework (design database) should be capable of creating lines of communication between related design activities such that duplication. Alternatively. and from it. is a continuing problem. Timescale reductions. it makes sense to perform checks. although incomplete. However. 4. Communication. Error reductions/design integrity. This is achieved by selectively powering up the components to be tested. is much more economic than simulation. does indicate the c o n f l i c t i n g pulls in CAD. The allocation of probe points and generation of tests is performed automatically. or the lack of it. These checks include physical loading rule and timing path checks.ease of design/manufacture. Reduced timescale compared to manual method. Unless benefits such as these are identifiable.3 Capitalisation In this section I will discuss the computer hardware required to support CAD systems. the size of computer and variety of peripherals is dependent upon the types of application.W.simulation. despite the benefits. 4. drill tape production etc. the most important point is that undercapitalised CAD is a recipe for failure. The rationale underlying this arrangement is: (a) A considerable proportion of the CAD service work is of a data processing nature .. Obviously. the freedom of the designer will be continually eroded unless a total systems view of the CAD system is taken. For example. struggling against impossible timescales. minicomputers have made inroads into the mainframe business. resilience and response is poor. correct power consumption etc. In no way can all these requirements be satisfied without compromise. (b) (c) .minimum silicon area. an automatic printed circuit board tracking program provides: * * * Labour reduction. If the service provided in terms of reliability. For example. automatic tracking etc. in many respects. require fast response but demand considerable computer power. With some four hundred users of the service. 
we have management requiring the tools to control the design of complex products such that they feel that they are in command. to produce an error-free design. they very quickly will become disillusioned and will justifiably claim that using a computer. Unfortunately. However. is actually slowing them down. however.20 R.. I do not want to get into the mini -v. over the last five years. As the complexity of the product increases. at the outset. These conflicting pulls upon the CAD system have not been satisfactorily met to date.. a considerable amount of file management is required. the area of compromise is in the freedom of the design engineer. at ICL I believe we have achieved the partition of computing activities between mini and mainframe which suits our activities. Reproducable results. an engineer may design an integrated circuit to all the conventional constraints . Some jobs . However. McGUFFIN At the one end. Large mainframe computers have been the traditional workhorses of CAD. To generalise. but if it is non-testable his work has been wasted. and is best suited to the background batch type of job. At the other end.mainframe arguments. Users of CAD systems expect to derive tangible benefits from their 'conversion' to CAD. The configuration is shown in figure 2. the number of users and volume of job throughput.logic group error reports. we have production demanding a product which they can make at a price marketing will accept. In the middle. then one should rethink whether CAD is appropriate to the solution of the problem. is the design engineer. the use of CAD must provide at least three quantifiable benefits to the user. TAPE DIGITISER DOT MATRIX PRINTER UTILITIES INTERACTIONS TABLETS TERMINALS DISK STORE Fig 2. MINICOMPUTER CONFIGURATION .AN INDUSTRIAL VIEW 2 1 DESIGN DATA-BASE LINK TO MAIN FRAME MINI GRAPHPLOTTER MAG.INTEGRATED COMPUTER-AIDED DESIGN . drawing office. To take an extreme example. 
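The automatic printed circuit board tracking cited above as a three-benefit example rests, classically, on maze routing of the Lee type: a breadth-first wavefront expanded from the source cell finds a shortest rectilinear path around obstructions, and does so identically on every run (hence the reproducible results). The sketch below uses a made-up 8x8 grid and is an illustration of the general technique, not of ICL's tracker.

```python
from collections import deque

# Hypothetical 8x8 routing grid: 0 = free cell, 1 = blocked (existing track/pad).
GRID = [[0] * 8 for _ in range(8)]
for r, c in [(3, 2), (3, 3), (3, 4), (4, 4)]:
    GRID[r][c] = 1                           # obstacles in the path of the net

def route(src, dst):
    """Lee-style maze routing: breadth-first wavefront expansion from the
    source guarantees a shortest rectilinear path if one exists."""
    prev = {src: None}                       # visited cells and their parents
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            path = []
            while cell is not None:          # trace back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < 8 and 0 <= nc < 8 and GRID[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                              # net cannot be routed

path = route((3, 0), (3, 7))                 # detours around the blocked cells
```

In a real tracker the same expansion is repeated per net and per layer, with costs added for vias and congestion; the labour saving comes from the machine doing this exhaustively for every connection on the board.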
(d) Other activities are of an interactive graphic nature - interactive logic diagram modification, for example. These require 'instant response', which can best be supplied by a minicomputer.

Project activities (a) to (c) are provided on the mainframe; for (d), various graphic work stations have been provided on different geographic sites within the company. Typical peripherals are:

* Graph plotters and dot matrix printers.
* Editing tablets and digitisers.
* Interactive storage screen terminals.
* Magnetic tape back-up.

In general these are used for LSI circuit design, interactive logic design and overall drawings for technical publications. However, they are 'front ends' to the CAD system and thus cannot be considered in isolation from the mainframe/design database. In ICL we have found that, at present, this is a useful configuration of hardware and partition of design activities; however, as more powerful minis are produced, the partition must be constantly reviewed.

4.4 Company Organisation

It is impossible to generalise on this subject; however, the position and status of the CAD system generators within the company is of vital importance. If they are located in the research department on a site remote from their customers, their impact on the company's products will be less than significant. In ICL the teams which provide hardware and software (operating systems) CAD are under the same manager. The combination, called a segment, enjoys equal status with:

* Technology - printed circuit board technologies, semiconductor device fabrication, component evaluation.
* Design Services - drawing office, technical authors etc.
* Test Engineering - design of in-house test equipment, evaluation of OEM testers.
* Project Teams - computer design projects.
* Operating systems segment.

and has a favourable relationship with the manufacturing division. This 'equal status' relationship ensures that, in the areas where CAD/DA is cost-effective, the magnitude and timeliness of support meets the product requirements.

4.5 Software CAD

Every programming manager responsible for large or medium scale projects must have felt that the problems associated with software production were inherent in the very nature of the software design process. Software planning and costing exercises tended to remain a rather hit-and-miss affair, ignoring the techniques which are standard practice in hardware engineering. Then there was the seemingly inevitable gulf between design and implementation. The traditional approach - an analyst specifying the design in a natural language like English for subsequent translation into machine-executable form by a different group of implementors - led to many misunderstandings and inefficiencies. This was aggravated by the imprecisions caused by using a natural language in the design, whereas there is no reason why the language used for expressing high level design concepts should be any less exact than the language chosen to express that design to the machine. Many of these problems sprang from the way in which software design was viewed more as an art form than a science.

Pieces of code the size of George 3 or OS360 cost a lot of money to design and code, more to build and debug, and a great deal more to enhance and maintain. In large projects there is also a requirement to be able to identify and preserve the overall structure: when design decisions are delegated, often the nett result is that the nature of the product changes, since the effect of the decision is not reflected upwards. This is structural decay.

Over six years ago, on the threshold of embarking on a major programme of software development, ICL decided to overcome these traditional software production problems by developing an in-house technology of software design. This technology took a unified view of the formalisation of the design process: the use of a formal design definition language, the use of computer-aided design and design automation techniques, code documentation and standards, and project control procedures. Of course, it has to be recognised that there is a considerable price to be paid to ensure effective design co-ordination and communication, particularly structural preservation. This approach to software design and control (Cades) has been thoroughly aired and documented elsewhere. The lessons learned are equally applicable to hardware projects; in the next generation of CAD systems this cannot be ignored, and the software experience should be utilised.

5. ADVANCING THE STATE OF ART

In reviewing the history of CAD, it was observed that the computer aids kept pace with the complexity of the problems. In hardware design, the product complexity has remained within tolerable proportions - ie: within the intellectual scope of man. Using conventional SSI and MSI packaged devices on printed circuit boards does not present all the problems of integrated circuit design - the principal difference being that PCBs are a 'modifiable' technology. Hardware CAD excels in the range of tools available to create, manipulate and produce designs. This is particularly true for computer manufacturers where, with VLSI, we can see a complexity barrier approaching - the early symptoms of which are a creeping paralysis in design. The initial design time for integrated circuit chips is lengthening at a predictable rate; however, the time and the number of iterations required to 'get it right' is escalating. Further, the testing of these devices is starting to become unmanageable, in that the goal of 100% testing is steadily retreating. The more cynical among the system manufacturers could claim that the semiconductor companies are starting to experience the problems they have lived with for years.
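Why the goal of 100% testing retreats with scale can be made concrete with the standard single stuck-at measure: grade a test set by the fraction of "net stuck at 0/1" faults whose effect reaches an output. The sketch below uses a made-up three-gate circuit; exhaustive grading is trivial here, but the fault list grows with every net and the pattern space doubles with every input, which is exactly the barrier described above.

```python
from itertools import product

# Hypothetical circuit y = (a AND b) OR c, with internal net n = a AND b.
NETS = ["a", "b", "c", "n", "y"]

def evaluate(a, b, c, fault=None):
    """Evaluate the circuit, optionally forcing one net stuck at 0 or 1."""
    def f(net, val):
        return val if fault is None or fault[0] != net else fault[1]
    a, b, c = f("a", a), f("b", b), f("c", c)
    n = f("n", a & b)
    return f("y", n | c)

def coverage(tests):
    """Fraction of single stuck-at faults detected by the given test set."""
    faults = [(net, sv) for net in NETS for sv in (0, 1)]
    detected = [any(evaluate(*t) != evaluate(*t, fault=fl) for t in tests)
                for fl in faults]
    return sum(detected) / len(faults)

full = list(product((0, 1), repeat=3))   # exhaustive: all 8 input patterns
print(coverage(full))                    # -> 1.0: every stuck-at fault caught
print(coverage([(1, 1, 0)]))             # -> 0.4: one pattern catches 4 of 10
```

The same bookkeeping over "many millions of nodes" is what makes fault simulation so costly, and why design-for-testability matters more than cleverer pattern search.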
An automated design technology must also take into account the problems of test and validation, and of the release, enhancement and maintenance of final products. The management of design has been less emphasised in the past; but whether 20, 50 or 200 people are involved in a project, controlling a large volume of code and many people requires automatic facilities for progress monitoring and control as part of the philosophy underlying the design structure.

Testing - it is difficult, for example, to determine how the test patterns for large integrated circuits will be generated. On printed circuit boards, design or production errors may be fixed with the aid of a soldering iron; this is not so with integrated circuits, and the onus is on the designer to minimise the number of design iterations and to produce a testable product. There is little point in designing something which cannot be tested with contemporary methods and equipment, and in a time period which reflects the complexity. The conventional fault model (nodes stuck high/stuck low), it could be argued, is not accurate enough to detect/diagnose all possible faults. This is partially true; on the other hand, with networks comprising many millions of nodes, the use of even this model may be uneconomic. Three points are clear: (1) design for testability; (2) top down test pattern generation; (3) the fault model. The concept here is the abstraction of test patterns from the functional description of the system (hardware or software) used for design.

So, in conclusion, the key points for the future are:

* Total design capture/coming together of disciplines - it is not sufficient for turnkey CAD system vendors to claim that they have the tools for VLSI; the roots of the problems lie much deeper, and hardware design problems are only a facet of the total complexity barrier.

* Design and manufacture - 'systems' are a combination of hardware and software; they cannot be considered independently. The next generation of CAD systems must support both hardware and software design and production, and there should be as natural a relationship between the two in the CAD systems as there exists in the product. Facets of this are the design language and the simulation of the interactions of hardware and software at various levels of design.

* Effective use of design manpower - a keystone in successful industry is the effective use of design manpower. This may be translated into the effective use of design automation to free designers, not from their responsibilities for production, but from detailed concern.

The convergence of design disciplines will remain a problem through to the 1980s but, as CAD systems evolve which will give us a key to the solution of this problem, we can look forward to VLSI with genuine confidence.

COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

PRODUCT SPECIFICATION AND SYNTHESIS

Douglas Lewin
Department of Electrical Engineering and Electronics, Brunel University, Uxbridge, Middlesex

The specification and evaluation of computer systems at the initial user requirement level is one of the most important and critical aspects of digital systems design. At the present time there is no viable specification and design scheme available for digital systems, as shown by a recent EEC feasibility study (1). The objective of this paper is to review the current "state of the art" in product specification and synthesis. A critical survey of existing methods of system specification - including such techniques as directed graphs, FSM theory, hardware description languages, simulation languages etc. - is presented, followed by a brief review of the current state of synthesis methods, synthesis techniques and design methods for secure and reliable systems, and by an attempt to define the fundamental problem area and future requirements. In so doing, the basic principles and similarities of the techniques which have emerged so far will be described. It is concluded that no suitable specification and design system is available at the present time, and possible reasons are given why this situation exists.

1. INTRODUCTION

Current digital and computer systems have now reached such a high degree of sophistication that conventional design methods are rapidly becoming inadequate. The major problem is the sheer complexity of the systems which are now feasible using LSI and VLSI subsystem modules such as micro-processors, micro-computers, ROMs, RAMs, PLAs, etc. In order to control and manage this complexity (in both software and hardware realisations) it has become necessary to enlist the aid of computers, and computer-aided design techniques are now becoming accepted as essential design tools. Unfortunately CAD, though successfully used at the logistics and manufacturing levels, has not as yet realised its full potential when applied to system specification and the conceptual design stages. This is evidenced by the singular lack of success in attempting to develop realistic specification and evaluation languages, and industry - just managing to cope with current technology - could well be faced with a major dilemma in the near future.

2. PRODUCT SPECIFICATION

The most important property of any CAD scheme is the ability to accurately specify the system under consideration using a suitable representation. A formal specification of the system not only ensures that the user requirements are correctly translated into an acceptable design, but also provides the essential basis for contractual and design documentation.
It is essential that the specification language should be able to provide an unambiguous and concise description of the system and be capable of serving as a means of communication between users, designers and implementers. In order to handle complex digital structures a specification language must be able to describe the system at several levels, that is, on a hierarchical basis. At the top level is the behavioural (information flow) description, which treats the system as an interconnection of functional modules specified by their required input/output characteristics. The next level down is the functional (data flow) description; this partitions the system into subsystem components and details the logical algorithms (micro-programs) to be performed by the components, with their corresponding highway transfers. At this level it should be possible to represent the algorithms in a variety of ways - state tables, flow charts, timing diagrams, software data representations etc. Finally, at the lowest level, is the structural (implementation) representation, which describes in detail the actual gates, bistables, LSI and MSI chips, etc. used to physically realise the subsystem functions - for example, in terms of Boolean equations.

An ideal specification language should have the following characteristics:

a) capable of representing logical processes independent of any eventual system realisation;
b) able to proceed directly from system description to physical realisation using either software or hardware processes;
c) facility to formally represent and evaluate the information flow in large variable systems at the behavioural level, and also to analyse data flow at the functional level;
d) ability to handle concurrent processes and to provide insight into alternative partitions of the system;
e) able to act as a means of communication between users, designers and implementers.

Numerous methods have been described in the literature for the description and design of digital systems. These techniques may be generally classified into three basic approaches:

i) Functional descriptive programming languages, such as hardware description languages (including register transfer languages), simulation languages, and some general purpose high level languages such as APL.
ii) Finite state machine (FSM) techniques - using, for example, state-table representations and flow-charts - including the algorithmic state machine (ASM) approach, with RTL operational procedures being used to specify the control programs.
iii) Graph-theoretic methods, employing transition graphs, regular expressions, Petri nets, occurrence graphs, etc.

The principal methods will now be considered in more detail in the following sections.

2.1 Register Transfer Languages

The intuitive design procedures used in digital and computer systems engineering are normally centred around a predefined register configuration. The execution of a required system function (for example, a machine-code instruction) is then interpreted in terms of micro-order sequences (called a control program or micro-program) which govern the necessary transfers and data processing operations between registers. Register transfer languages are based on this heuristic design procedure and allow the declaration of register configurations (the data structure) and the specification of the required data flow operations (the control structure). Thus the declarative section of the language in essence forms a linguistic description of the block diagram of a machine. A typical register transfer language description (Chu's CDL) for the LOAD instruction of a computer is shown in Table 1. Note that the RTL procedures can be used for documentation and simulation purposes; it is also possible to generate Boolean design equations directly from the RTL descriptions.

The first register transfer language was proposed by Reed (4) and was non-procedural in nature, with a small vocabulary directly related to hardware elements. Due to the non-procedural character of the language it was necessary to prefix each statement with a conditional label (either a clock pulse or flag value) detailing the conditions for executing the operations defined by the RTL statements. The Reed language, however, was very primitive, having no facilities for block structures or adequate means of handling branching operations such as test and jump instructions.

Schorr (5) extended the Reed language by including timing pulses as an integral part of the conditional statements; thus the notation could be used to represent both synchronous and asynchronous systems. For example, it is possible in the language to write statements of the form:

|t1 ~S3| : A∧B → D, → t2
|t1 S3| : A + 1 → A, → t5

where, if ~S3 = 1 and t1 = 1, the operation A∧B → D is performed and the next statement to be executed occurs at t2; if S3 = 1 the alternative operation takes place, that is, a jump to t5 occurs. Note that A, B and D are registers and S3 a flag bistable or register stage. Schorr's language not only provided a more practical means of documenting microprograms but also had the distinct advantage of being fully implemented using a syntax-directed compiler based on ALGOL 60, with microprogram statements being directly translated into the Boolean input equations for the bistable registers; it was used essentially as an algorithmic language for defining microprograms. The language had the further advantage of being able to describe special operators (such as count up/down), predetermined sequences, and a form of GOTO statement. More important, however, was the introduction of subroutine facilities which allowed system modules, such as counters, adders, etc., to be declared as high level blocks, thus enabling a hierarchical description to be employed.

Reed's language was also used as the model for the LDT (logic design translator) language developed by Gorman and Anderson (6) and Proctor (7). LDT was a formally defined procedural language and included high-level ALGOL-type operators such as IF, THEN, ELSE, GOTO etc., branching and conditional transfers, as well as the basic RTL operations. The main function of LDT was the derivation of the bistable equations, suitably optimised, directly from the RTL description; the language thus had facilities for performing logic synthesis. LDT also enabled a timing analysis to be performed, using a sequence chart approach (8) which enabled the individual register transfer operations to be displayed against time.

Another ALGOL-based language (though non-procedural) was described by Chu and called CDL (9). CDL was used primarily for the specification and simulation of digital systems and is still widely used in teaching. Unfortunately CDL had the major disadvantages of functioning in a synchronous mode, having no facilities for block structures, and being unable to describe independent concurrent operations.

Though not originally conceived as a register transfer language, APL (10) has been extensively used for algorithm definition and the description of computer architectures; in particular the language has found acclaim in the teaching of digital systems (11). APL has also been used by IBM as the basis of the ALERT (automatic logic design generation) system (12), with modifications to allow the expression of control and timing functions and the representation of block structures and parallel processes. ALERT was basically a conventional RTL system with provision for translating the microprogram description into a minimised set of logic design equations for the registers and control logic. ALERT was implemented on the IBM 7094 machine and used to reproduce the design for an IBM 1800 computer; though the resulting design was logically correct, it was found to be highly redundant in terms of hardware.

ISP (instruction-set-processor) was initially developed to describe primitives at the programming level of design in the PMS and ISP descriptive system due to Bell and Newell (13).
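The Reed/Schorr style of conditionally labelled micro-orders is easy to make concrete in executable form. The sketch below is a toy interpreter, not any of the languages surveyed: the register names (A, B, D), the flag S3, the operations and the timing-pulse labels t1/t2/t5 are all illustrative, chosen to mirror the two-statement example given above.

```python
# Hypothetical machine state, in the spirit of RTL statements of the form
# "|t S| : operation -> destination, -> next timing pulse".
regs = {"A": 5, "B": 3, "D": 0, "S3": 0}

# Each micro-order: (required S3 value, operation, next timing pulse).
# The two entries under t1 mirror the alternative statements guarded on S3.
PROGRAM = {
    "t1": [(0, lambda r: r.update(D=r["A"] & r["B"]), "t2"),
           (1, lambda r: r.update(A=r["A"] + 1), "t5")],
    "t2": [],                            # terminal timing pulses:
    "t5": [],                            # no further statements to execute
}

def run(state, start="t1"):
    """Execute micro-orders until a timing pulse with no statements."""
    label, trace = start, [start]
    while PROGRAM[label]:
        for guard, op, nxt in PROGRAM[label]:
            if state["S3"] == guard:     # conditional label selects the branch
                op(state)
                label = nxt
                break
        trace.append(label)
    return trace

state = dict(regs)                       # S3 = 0: perform A AND B -> D, go to t2
trace = run(state)
```

A simulator built this way gives exactly the documentation-plus-execution role the text describes: the same statements are both the design record and the thing that runs.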
However, ISP has been implemented and used to describe and simulate computer architectures (14); in particular it has successfully been used to perform comparison studies of computers for military use (15). ISP is similar in characteristics to other register transfer languages, but with facilities for handling block structures and concurrency and the simple sequencing of processes, and it is being seriously considered as a standard hardware description language by U.S. government agencies.

In the digital design language (DDL) described by Duley and Dietmeyer (15), a system is viewed as a collection of several subsystems or automatons, each possessing "private" facilities and having access to "public" facilities (common busses) which are used for intercommunication between automatons. In DDL a system is specified using a block structured description, where the outermost block defines the whole system in terms of subsystem blocks (automata) and the inner blocks specify the automata in terms of their state and I/O behaviour. DDL is a non-procedural language and uses the concept of finite state machines to control operations - the implementation of which draws heavily on FSM theory - by storing the state of the system in registers which can be tested and modified using special operators. Table 2 shows examples of the more usual operators and declarations. As well as being able to describe digital systems, the DDL specification can also be translated into Boolean and next-state equations to describe a hardware realisation.

Other system design languages have been described in the literature. One such language is CASSANDRE, proposed by Mermet and Lustman (17), which was based on ALGOL and uses the block structures of that language to achieve system partitioning. A CASSANDRE description consists of defined units and their interconnections, and each unit may itself comprise a network of units; the description itself is in Reed-like statements and contains the usual register transfers and operators. The language has been implemented on an IBM 360/67 machine, but only used for logic level simulation and micro-program evaluation.

The CASD (computer aided system design) language (19) encompassed high level system descriptions - including special operators and declarative statements for the system level description, global variables, input-output requirements etc. - simulation at both systems and logic levels, and automatic translation to detailed hardware. CASD was based on PL/I and used its block structuring facilities to develop the hierarchical specification. CASD, however, was never implemented: it was basically a feasibility study, and no implementation of this language has as yet been reported.

The LALD (language for automatic logic design) system (20) allows a multi-level system description in terms of interconnected sub-system components. The control and data structures must be specified separately, and the control structure can be implemented using either hardware or software. LALD compilers have been reported for the CDC 6400 (using SNOBOL) and in PL/I for the IBM model 91.

Though it would appear that considerable effort has been expended on the development of register transfer and hardware description languages, very few have been adopted for use in a real engineering situation, and a viable cost-effective system still remains to be developed. Though some of the languages described above have the ability to describe subsystem blocks, none of them have facilities for representing a partitioned system consisting of interconnected autonomous modules. Moreover, many of the systems described above have been outdated by the rapid progress in microelectronics.
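The translation from a behavioural description down to Boolean equations, as performed by systems like DDL, implies an obligation the text leaves implicit: the two levels must be demonstrably the same function. The sketch below is a minimal illustration of that check on a hypothetical example (a one-bit full adder), comparing an arithmetic behavioural description against hand-written gate-level Boolean equations over all input conditions.

```python
from itertools import product

def behaviour(a, b, cin):
    """Behavioural level: treat the adder as arithmetic on integers."""
    total = a + b + cin
    return total & 1, total >> 1          # (sum, carry-out)

def structure(a, b, cin):
    """Structural level: the same function as Boolean (gate) equations."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustive comparison over all input conditions - trivially feasible here,
# but the cost doubles with every extra input variable, which is why the
# step-by-step evaluation of large circuit models becomes so expensive.
assert all(behaviour(*v) == structure(*v) for v in product((0, 1), repeat=3))
```

The same exhaustive-comparison idea underlies the remark, made below, that evaluating simulated-component models requires a step-by-step examination of all relevant input-output conditions.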
system behaviour must be interpreted indirectly from program performance whilst operating on certain specified data types. and is being seriously considered as a standard hardware description language by U. Hardware description languages usually describe a digital system In terms of simulated components and their interconnections. no implementation of this language has as yet been reported. It will be obvious that this ¡s a time consuming process and that large amounts of storage would be required to represent the circuit model. A similar language is the CRISMAS system (18) which also uses a hierarchical block-structured definition language. global variables. ISP is similarin characteristics to other register transfer languages but with facilities for handling block structures and concurrancy and the simple sequencing of processes. Other system design languages have been described in the literature. input-output requirements etc.S. the implementation of which draws heavily on FSM theory. serial systems (such as pattern detectors) where the computation can proceed as a step-by-step operation on the input. for example. who showed that any finite state. require to have al I the Input data available before the computation can proceed. Consequently with multipleoutput circuits it is necessary to derive separate regular expressions for each output terminal. Due to its finite memory limitation (that is.2 Finite State Machine Techniques Finite state machine theory. since register transfer languages are constrained to operate on well defined data types. inversely. synchronous automaton can be described by a regular expression. particularly if both control and data structures are represented In the same state-table.. Thus regular expressions constitute a formal language which can be used to characterise the external (input-output) behaviour of sequential circuits (combinational circuits being treated as a special case). using for example state-table representation. 
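To make the register-transfer style of description concrete, the guarded parallel transfers common to these languages can be sketched in present-day terms. The sketch below is illustrative only (it is not taken from any of the languages surveyed, and all register names are invented): registers are fixed-width bit-vectors, and one statement is a set of conditional transfers applied simultaneously.

```python
# Illustrative sketch of a register transfer step; names are invented.

WIDTH = 8
MASK = (1 << WIDTH) - 1

def step(regs, transfers):
    """Apply every enabled transfer simultaneously: all right-hand sides
    are evaluated against the old register values before any write."""
    old = dict(regs)
    for guard, dest, expr in transfers:
        if guard(old):
            regs[dest] = expr(old) & MASK
    return regs

# Roughly: IF START THEN (A <- A + B, COUNT <- COUNT + 1)
program = [
    (lambda r: r["START"] == 1, "A",     lambda r: r["A"] + r["B"]),
    (lambda r: r["START"] == 1, "COUNT", lambda r: r["COUNT"] + 1),
]

regs = {"START": 1, "A": 250, "B": 10, "COUNT": 0}
step(regs, program)
print(regs["A"], regs["COUNT"])   # prints: 4 1  (250 + 10 wraps in an 8-bit register)
```

Note that the simultaneous (read-old, write-new) semantics mirrors the clocked register transfer, and that behaviour can only be observed by running the program on particular data values, which is precisely the indirect interpretation criticised above.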
The use of formal methods, such as FSM and graph theory, for system description would appear to have considerable potential: they provide a formal structure capable of analysis, direct implementation etc., though they are normally restricted to hardware representation. These techniques are described in the following sections.

2.2 Finite State Machine Techniques

Finite state machine theory, using for example a state-table representation, though theoretically capable of describing any digital system, is not viable in practice owing to the considerable practical difficulties involved in expressing large variable problems and the inordinate amount of computation required to manipulate the resulting structures. It will also be obvious that the method automatically specifies both the control and data structures, and hence would certainly lead to computational difficulties with large variable circuits, particularly if both are represented in the same state-table. Large systems must inevitably be partitioned by the designer into sub-system components in order to comprehend their complexity, and if the concept of separately defining data and control structures is used, state-tables can still be a useful aid in design. This is borne out by the algorithmic state machine (ASM) approach to design (21), which uses a flow-chart to specify the control logic for a system; the method was successfully used by Hewlett Packard for the design of calculators etc.

In general the FSM accepts a serial input (or inputs) and progresses from state to state, producing an output sequence (or sequences) in the process. A formal approach to system description based on FSM theory was originally described by Kleene (22), who showed that any finite state, deterministic, synchronous automaton can be described by a regular expression, and that, inversely, every regular expression can be realised as a finite state machine (23). Regular expressions describe the required set of input sequences (in terms of algebraic operations on sequences of 0's and 1's) to a FSM in order to generate an output; thus the behavioural description for a FSM can be reduced to an algebraic formula, and regular expressions constitute a formal language which can be used to characterise the external (input-output) behaviour of sequential circuits (combinational circuits being treated as a special case). Later work by Brzozowski (24), using the derivative of a regular expression, described an easy-to-use and systematic method of transforming a regular expression to a state-table.

Though regular expressions would appear to have many of the characteristics required by a specification language, there are considerable disadvantages in practice. Contrary to what has been written, the method is not in fact easy to use: design engineers find the formalism very difficult to apply, encountering considerable difficulties in converting from a verbal description to the algebraic formulation, and the description is of limited value for general communication purposes; moreover, no computer implementation is currently available. Another basic disadvantage is that the language is really only suitable for FSM's with a single output terminal; consequently, with multiple-output circuits it is necessary to derive separate regular expressions for each output terminal.

Finite state machine methods, as well as having these practical drawbacks, also suffer from a more fundamental disadvantage. Due to its finite memory limitation (that is, the number of internal states) the FSM is best suited to describing systems where the amount of memory required to record past events (that is, the effect of earlier inputs) is both small and finite: for example, serial systems (such as pattern detectors) where the computation can proceed as a step-by-step operation on the input, and the amount of information required to be 'remembered' is very small. However some processes, such as serial multiplication, require all the input data to be available before the computation can proceed.
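As a minimal illustration of the correspondence between regular expressions and finite state machines described above (the example itself is ours, not the chapter's): the expression (0|1)*101, "any input sequence ending in 101", can be realised as a small FSM whose states record the longest suffix of the input so far that is a prefix of 101.

```python
# A pattern-detector FSM realising the regular expression (0|1)*101.
# State names are illustrative: each state records the matched prefix.

TABLE = {                                   # state -> {input symbol: next state}
    "s0":   {"0": "s0",  "1": "s1"},
    "s1":   {"0": "s10", "1": "s1"},
    "s10":  {"0": "s0",  "1": "s101"},
    "s101": {"0": "s10", "1": "s1"},
}
ACCEPT = {"s101"}                           # output 1 whenever 101 has just occurred

def run(word, table=TABLE, start="s0"):
    state, outputs = start, []
    for sym in word:
        state = table[state][sym]
        outputs.append(1 if state in ACCEPT else 0)
    return state, outputs

state, outs = run("0110101")
print(state in ACCEPT, outs)   # prints: True [0, 0, 0, 0, 1, 0, 1]
```

The state-table form also makes the single-output restriction visible: detecting a second pattern at another output terminal would require a separate table (or a product machine), exactly the multiplication of descriptions noted in the text.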
Moreover, large amounts of information could need to be stored during the course of the operation (for example, the accumulation of partial sums in the case of multiplication). Thus it follows that the FSM has the inherent disadvantage that it is impossible to specify a machine which requires the manipulation of arbitrarily large pairs of numbers; note also that the FSM lacks the ability to refer back to earlier inputs unless the entire input sequence is initially stored. These limitations can of course be overcome by using an infinite machine model, such as the Turing machine (25), where the available memory is theoretically unlimited.

2.3 Directed Graph Methods

One mathematical tool which is finding increasing application in computer systems design and analysis is graph theory (26), and many of the more successful specification methods are couched in graph theoretic terms. A directed graph is a mathematical model of a system showing the relationships that exist between members of its constituent set. The elements of the set are normally called vertices or nodes, with the relationships between them being indicated by arcs or edges. Directed graphs have been used, for instance, to represent information flow in control and data structures, parallel computation schemata, diagnostic procedures in logic systems etc. The major advantage of using graph theory, apart from the obvious visual convenience, is that formal methods exist for the manipulation of graph structures, which can be represented by matrices for computer processing.

Graphs may be classified into various types depending on their properties. An example of a directed graph is shown in Figure 1a, where the set of nodes is given by N = {n1, n2, n3, n4, n5} and the set of edges by E = {e1, e2, ..., e6}. A net, shown in Figure 1b, is a directed graph consisting of a finite nonempty set of nodes and a finite set of edges; note that a net may have parallel edges, that is, two nodes connected by two different edges both acting in the same direction. A net which does not contain parallel edges, but which has values assigned to its edges, is called a network, as shown in Figure 1c.

The transition graph is a simple example of a directed graph used to represent automata. It consists of a set of labelled vertices connected by directed arcs, and in every graph there is at least one starting vertex and at least one terminal vertex. Each directed arc is labelled with symbols from the input alphabet of the machine (I = {0,1} in the case of a binary system). A sequence of directed arcs through the graph is referred to as a path, and describes the input sequence consisting of the symbols assigned to the arcs in the path; an input sequence is said to be accepted by the graph if a path exists between a starting and a terminal vertex.

Transition graphs have the advantage over state-diagrams (which are a special case) in that it is only necessary to define the input sequences of direct interest, alternative input transitions being omitted. Thus the transition graph is nondeterministic in the sense that, unlike state-diagrams, it is incompletely specified; again, this implies that the input sequence of interest must be of known finite length. The transition graph also provides a convenient shorthand for representing deterministic machines, since it is always possible to convert a transition graph into an equivalent state-diagram (27). However, in general it is difficult to derive a transition graph which faithfully represents a required machine specification.

Another directed graph approach which has found considerable application in the description and analysis of digital systems is the Petri net (28)(29), an abstract, formal graph model of information flow in a system.
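The acceptance condition for the transition graphs just described can be sketched directly (an illustrative example of ours, with invented vertex names): because the graph is nondeterministic, a frontier of reachable vertices is carried along the input sequence, and the sequence is accepted if the frontier finally meets a terminal vertex.

```python
# Acceptance in a nondeterministic transition graph.  Parallel edges are
# allowed, and transitions of no interest are simply omitted.

EDGES = {   # vertex -> list of (input symbol, successor vertex)
    "A": [("1", "B"), ("1", "A"), ("0", "A")],
    "B": [("0", "C")],
    "C": [("1", "C"), ("0", "C")],
}
START, TERMINAL = {"A"}, {"C"}

def accepts(word):
    frontier = set(START)
    for sym in word:
        frontier = {v2 for v in frontier
                       for s, v2 in EDGES.get(v, []) if s == sym}
    return bool(frontier & TERMINAL)

print(accepts("0110"), accepts("111"))   # prints: True False
```

The frontier construction is essentially the subset conversion to an equivalent deterministic state-diagram performed one input symbol at a time, which is why such graphs remain a convenient shorthand for deterministic machines.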
A Petri net consists of two types of node, places (drawn as circles) and transitions (drawn as bars), connected by directed arcs; Figure 2 shows a typical Petri net. Each arc connects a place to a transition or vice versa: in the former case the place is called an input place, and in the latter an output place, of the transition. The places correspond to system conditions which must be satisfied in order for a transition to occur.

In addition to representing the static conditions of a system, the dynamic behaviour may be visualised by moving markers (called tokens) from place to place round the net. A Petri net with tokens is called a marked net, and it is usual to represent the presence of tokens by a black dot inside the place circle. A Petri net marking is a particular assignment of tokens to places in the net, and defines a state of the system; in Figure 2a the marking of places B and C defines the state where the conditions B and C hold and no others.

Progress through the net from one marking to another, corresponding to state changes, is determined by the firing of transitions according to the rules:

a) a transition is enabled if all of its input places hold a token;
b) any enabled transition may be fired;
c) a transition is fired by transferring tokens from input places to output places: thus firing means that instantaneously the transition inputs are emptied and all of its outputs filled.

(Note that transitions cannot fire simultaneously; only one transition can occur at a time.) This is illustrated in Figure 2, where 2a shows the original marked net and 2b the state of the net after firing transition a; after two further firings the net would arrive at the marking shown in Figure 2c. Note that the Petri net is able to depict concurrent operations.

In general a conflict will arise when two transitions share at least one input place; Petri net models are normally constrained to be conflict free. In Figure 2c the net is said to be in conflict, since firing either of the transitions d or e would cause the other transition to be disabled. Another limitation imposed on the model is that a place must not contain more than one token at the same time: this condition leads to a safe Petri net. A live Petri net is defined as one in which it is possible to fire any transition of the net by some firing sequence, irrespective of the marking that has been reached; note that a live net would still remain live after firing.

It is also possible to define sub-classes of Petri net. Of particular interest is the state-machine, which restricts a Petri net such that each transition has exactly one input and one output; note that this model is directly equivalent to a finite-state machine. The Petri net model described above may also be extended into a generalised theory by allowing multiple arcs between transitions and places, thereby allowing a place to contribute (or receive) more than one token. Further extensions to the basic model, such as the inclusion of inhibiting arcs, have also been suggested; however, it has been shown that any generalised extension of the Petri net is equivalent to a Turing machine.

The Petri net is considerably more powerful than the FSM model in that it can represent concurrent operations and permit indeterminate specification; hence it can provide a more faithful representation of complex system behaviour. An essential property of any model, however, is that it must be possible by analysis to obtain precise information about its characteristics. The FSM model, for example, since it has a finite number of states, can theoretically provide the answer to any question concerning its behaviour, whereas the Turing machine, because of its unbounded memory, is very difficult to analyse if a definite answer to a behavioural question is required. Thus we have the fundamental difficulty that the more powerful a model, the more difficult it is to determine its properties algorithmically; on this scale the modelling power of the Petri net can be considered to lie slightly below that of the Turing machine.

Petri nets have been extensively used to model and evaluate the control structures of logical systems in both software and hardware design. In software design Petri nets have been used to model the properties of operating systems, such as resource allocation and deadlock situations (related to the liveness of a net). In addition it has been shown (30) that it is possible to replace the individual elements of a Petri net by hardware components, thus providing a direct realisation of the control circuits. Petri nets can also be used to model hierarchical structures (31), since an entire net may be replaced by a single place or transition at a higher level.
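The firing rules above for an ordinary safe net can be stated very compactly as set operations; the following sketch (our own illustration, with invented place and transition names) shows a net in which firing one transition starts two concurrent branches that are later synchronised.

```python
# Firing rule for an ordinary (safe) Petri net: a transition is enabled
# when every input place holds a token; firing empties the inputs and
# fills the outputs.  Net and names are illustrative only.

NET = {   # transition -> (input places, output places)
    "a": ({"A"}, {"B", "C"}),      # firing a starts two concurrent branches
    "b": ({"B"}, {"D"}),
    "c": ({"C"}, {"E"}),
    "d": ({"D", "E"}, {"A"}),      # d synchronises the two branches
}

def enabled(marking, t):
    ins, _ = NET[t]
    return ins <= marking          # all input places marked

def fire(marking, t):
    ins, outs = NET[t]
    assert enabled(marking, t)
    return (marking - ins) | outs

m = {"A"}
m = fire(m, "a")                   # marking {B, C}: B and C hold concurrently
print(enabled(m, "b"), enabled(m, "c"), enabled(m, "d"))   # prints: True True False
m = fire(fire(m, "b"), "c")        # the two branches proceed independently
m = fire(m, "d")                   # synchronisation returns the net to {A}
print(m == {"A"})                  # prints: True
```

Representing a marking as the set of marked places is exactly the safe-net restriction: a place is either marked or not, and the generalised (multi-token) theory would instead require a multiset of tokens per place.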
Though Petri nets have many of the properties required for the specification and design of digital systems, to date there has been only one example of their use in a CAD system, and that on an experimental basis. Project LOGOS (34)(35), conceived at Case Western University, was based on Petri net principles and had the objective of providing a graphical design aid which would enable complex parallel systems to be defined (on a hierarchical basis), evaluated at any level, and then finally implemented in either hardware or software. The LOGOS system employed two directed graphs, one for data flow (the data-graph, DG) and one for control flow (the control-graph, CG), to define a process (called an activity). The major advantage of the directed graph approach is that it is amenable to mathematical analysis, and many authors (32)(33) have described algorithmic methods for their analysis. In the main these techniques apply to the control graph function only, allowing general logical processes to be specified without reference to a particular implementation (known as an uninterpreted analysis); using this type of model the designer is unconstrained in his thinking, but no allowance is made for operations performed in conjunction with the data structure. Though it was found possible to realise the control operators in the CG, the problem of transforming the DG components was never fully resolved, and the computational problems encountered in attempting to perform an interpreted analysis (involving both CG and DG structures) of an activity were found to be extreme. Though the LOGOS system was the most ambitious attempt to date to develop an integrated CAD system, including the vital topic of specification and evaluation, it nevertheless still requires considerable further development before it can become a viable design tool.

3. SYNTHESIS OF DIGITAL SYSTEMS (36)

An essential prerequisite to any synthesis package is a suitable specification and evaluation language. The specification techniques described above have included both special purpose languages and graphical methods, and many of them (RTL's, ASM charts, LOGOS etc.) incorporate some method of hardware realisation; in the main, however, these rely heavily on conventional switching theory, the resulting circuits being realised in terms of basic gates and bistable elements, otherwise the problem is reduced to one of minimisation and implementation of design equations. Purpose designed synthesis systems such as CALD (37) and MINI (38) employ a tabular or cubic notation to input Boolean design equations and then use heuristic techniques to obtain a near minimal solution; these, together with numerous other examples of synthesis techniques (39), all rely heavily on classical switching theory.

Unfortunately semiconductor technology has progressed to such an extent that the use of minimisation methods and implementation in terms of NOR/NAND logic is no longer relevant. Though these procedures have some application in the design of MSI sub-system components, their usefulness in system design is strictly limited. Current exceptions are the use of PLA's, which utilise multiple output SOP terms, where minimisation can, for example, reduce the required surface area of the chip, and the implementation of FSM's using ROM's (40), where minimising the states will reduce the number of words required in the memory. Specific hardware design techniques are still required at the LSI component level, for example algorithmic techniques for the realisation of systems using ROM's, PLA's etc., though even here conventional techniques are of little use, and they will certainly not be able to cope with future VLSI circuits.

The specific question of logic circuit synthesis has become subsumed by the general problem of computer systems engineering. At the systems level it is no longer possible to divorce hardware and software techniques, and it is essential that any synthesis procedure should take into account the design of software, as an alternative to hardware, in system realisation. The situation is becoming even more critical now that programmable electronics, such as microprocessors and micro-computers, are being used as sub-system components. Theory has been (and is still being) outpaced by technology, and a major and severe problem now exists due to the lack of a suitable design theory at the sub-systems level. It is vital that the dichotomy that now exists between hardware and software engineering is obviated if progress is to be made, since the synthesis problem can only be solved by adopting a general systems approach: a "top-down" approach to design is essential, to allow the system to be partitioned into viable and compatible hardware and software processes.

4. DISCUSSION

There are many difficult problems to be solved before a viable specification and design language for digital systems engineering can be developed. It has already been suggested that the FSM model has severe limitations when used to specify complex systems. These limitations can of course be overcome by using infinite or unbounded models such as the Turing machine or Petri net; unfortunately the transformation from a conceptually unbounded model to a practical realisation can, and does, present serious difficulties. This means, for example, that only particular paths through a Petri net are allowed, and the technique results in a loss of information and affects the accuracy of the model.

Register transfer languages are adequate for the design of register structured systems, and generally for all systems which separate the control and data functions, but they are specifically hardware orientated. Another disadvantage is that the languages tend to generate very simple constructs; this is due to the languages providing only simple elements, and the users perpetuate the situation by designing at a low level. It is this fact which accounts for much of the redundancy encountered in RTL implementation schemes.

There are two distinct cases when subsystem blocks are required: a) to represent a component or sub-routine which will be used by the system many times over, but not actually implemented each time; and b) the insertion of a standard hardware component (analogous to a software macro), such as a multiplexer unit, an arithmetic unit or any complex data-processing structure, which needs to be implemented as such in various places in the system. The major difficulty comes in isolating identical functions and, if necessary, merging them together. A related problem occurs in the generation and use of library routines for components used to represent complex MSI and LSI circuits and other data structures; this problem is also relevant when considering the implementation of Petri net schema, as for example in LOGOS.

It will be obvious that, from the user's (and designer's) point of view, the use of formal graph theory could present an intellectual problem. Though graphical techniques have a visual advantage, it would appear that a language approach based on formal methods would be preferable. It will be apparent that the whole question of synthesis is wide open, and many difficult problems remain to be solved.

Another fundamental problem is encountered in the analysis of large systems, particularly if an unbounded model is adopted. In general a detailed analysis of the modelled system proves to be prohibitive in computer time, and since formal methods are not possible it would appear inevitable that evaluation must be performed using simulation techniques; in order to determine the system operation it is then necessary to constrain the analysis to a restricted set of input and state conditions. If a detailed analysis of a logic algorithm is required (say in seeking the answer to a specific question) there is no other choice but to examine all possible alternatives in an iterative manner; if exact information about a system is sought, the Petri net must be examined (ideally in an interpreted mode) for all possible firing sequences. This would be possible, of course, for a closed system with a small bounded set of input and state variables.
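The exhaustive examination of firing sequences is tractable for such a closed, bounded case because a safe net has only finitely many markings; the sketch below (an illustration of ours, with an invented net) enumerates every reachable marking by search, including the two markings produced by a pair of conflicting transitions.

```python
# Exhaustive (uninterpreted) analysis of a small safe Petri net: enumerate
# all markings reachable from an initial marking.  Names are illustrative.

NET = {   # transition -> (input places, output places)
    "a": ({"A"}, {"B"}),
    "d": ({"B"}, {"C"}),   # d and e conflict: both need the token in B
    "e": ({"B"}, {"D"}),
}

def successors(marking):
    for t, (ins, outs) in NET.items():
        if ins <= marking:                       # transition enabled
            yield t, frozenset((marking - ins) | outs)

def reachable(initial):
    seen, frontier = {initial}, [initial]
    while frontier:
        m = frontier.pop()
        for _, m2 in successors(m):
            if m2 not in seen:
                seen.add(m2)
                frontier.append(m2)
    return seen

marks = reachable(frozenset({"A"}))
print(len(marks))   # prints: 4  (the markings {A}, {B}, {C} and {D})
```

The limitation discussed in the text is visible even here: the number of markings can grow combinatorially with the number of places, so for realistic systems the search must be restricted to a subset of input and state conditions, with a corresponding loss of information.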
5. CONCLUDING COMMENTS

It would appear that at the present time there is no ideal specification, evaluation and synthesis scheme available for digital systems design. Directed graph techniques appear to hold the most promise as the basis for specification and analysis, but there are nevertheless many fundamental problems remaining to be solved before a viable CAD system can be evolved. The need for a system level theory capable of handling the interconnection of LSI modules working in a concurrent mode has already been stressed. The urgent need for developing a design automation scheme is indisputable, but is there any hope of finding a solution to the basic problems? In the short term much might be accomplished by jettisoning the philosophy of attempting to develop a general design language which can serve all purposes, from user level specification, through evaluation, down to system realisation, and concentrating instead on a specific language and methodology for each function, with well defined transformations from one to the other.

There is also another, perhaps even more fundamental, aspect to the problem. Computer science today is concerned primarily with the derivation of efficient (and correct) algorithms for sequential machines based on a deterministic binary model. Thus the scientist searches for a solution which enables system behaviour to be explicitly described by some mapping or transformation of the input variables onto the outputs. Unfortunately the complex systems which will be required in the not-too-distant future demand a fundamental reappraisal of the available theory. Systems today are more characteristic of an open system, since in practice they possess an unbounded set of input and state conditions which corresponds to an interaction with the total environment. For example, digital systems with user interaction (say a sophisticated real-time system) have the characteristic that the fundamental structure (or algorithm) can perform many different functions depending on how the basic input set is modified by the user, even to the extent of reconfiguring its original structure. This property is typical of an open system, and in the case of a digital system should be recognised as such. It follows directly from this assumption that it may be impossible to predict exactly what the response of a system will be to any given stimuli. With the complexity of today's systems it is naive to expect to be able to derive design algorithms which ensure that a known and correct output will occur for a given set of input conditions; consequently a probabilistic approach, or similar alternative, must be employed. Thus to continue looking for design methods which will ensure complete and absolute correctness may well be a futile search, and at variance with all we hope to achieve in the future. Sutherland and Mead (41) go further in suggesting that a new theoretical basis for computer science is required, based on spatial distribution and communication paths rather than the theoretical analysis of sequential algorithms.
TABLE 1: CDL Description for LOAD Instructions

Register: R(0-17), A(0-17), C(0-11), D(0-11), F(1-18), G, H(1-5)
    (buffer register for memory M; arithmetic register; address register for memory; program register; buffer register for control memory CM; stop/start control register; address register for control memory CM)
Subregister: R(OP) = R(0-5), R(ADDR) = R(6-17), F(ADDR) = F(1-5)
    (operation code part of register R; address part of register R; address part of register F)
Memory: M(C) = M(0-4095, 0-17), CM(H) = CM(0-31, 0-17)
    (main memory; control memory)
Switch: Power(ON), Start(ON), Stop(ON)
    (power switch; start switch; stop switch)
Clock: P(1-3)
    (three phase clock)

TABLE 2: Declarations and Operators in the DDL Language

a) Declarations (declaration type: hardware interpretation)

MEmory or REgister: n- or one-dimensional arrays of bistables
TErminal: n-dimensional set of wires, terminals or buses
BOolean: logic network defined by Boolean equations
OPerator: combinational circuitry shared among facilities
ELement: input-output terminals of a standard module
STate: defines the states of an automaton
AUtomaton: defines an automaton composed of a FSM and facilities
SEgment: defines the portion of the automaton which contains the declaration
SYstem: defines a system with K automata and the system's public facilities
IDentifier: assigns identifiers to previously defined operands
TIme: periodic clock or signal generator

b) Operators

Activation, A VID (CSOP): CSOP is a set of operations that affect automaton A VID
Connection, ID = BE: the terminals ID are connected to the network defined by Boolean expression BE
Transfer, ID <- BE: memory elements ID are loaded from the network defined by Boolean expression BE
Transition type 1, -> SID: execute a transition to state SID (in the same block)
Transition type 2, => SEGID(->NID, ->RID): execute a transition to state NID in segment SEGID, and return to state RID upon execution of a return operation
Return transition: return to the state specified by a transition of type 2
Conditional, IF |BE| THEN CSOP: if BE = 1, execute CSOP
Conditional, IF |BE| THEN CSOP1 ELSE CSOP0: if BE = 1, execute CSOP1; if BE = 0, execute CSOP0

REFERENCES

1. CAD Electronics Study: Report on feasibility study commissioned by the EEC.
2. Hardware Description Languages, IEEE Computer 7(12) (1974) (special issue).
3. M.R. Barbacci: A Comparison of Register Transfer Languages for Describing Computers and Digital Systems, IEEE Trans. Computers C-24 (1975) 137-150.
4. I.S. Reed: Symbolic Synthesis of Digital Computers, Proc. ACM, Sept. 1952, 90-94.
5. H. Schorr: Computer-Aided Digital Systems Design and Analysis using a Register Transfer Language, IEEE Trans. Electronic Computers EC-13 (1964) 730-737.
6. J.P. Roth: Systematic Design of Automata, AFIPS FJCC 27 (1965) 1093-1100.
7. D.F. Gorman and J.R. Anderson: A Logic Design Translator, AFIPS FJCC 22 (1962) 251-261.
8. R.A. Proctor: A Logic Design Translator Experiment Demonstrating Relationships of Language to Systems and Logic Design, IEEE Trans. Electronic Computers EC-13 (1964) 422-430.
9. Y. Chu: An ALGOL-like Computer Design Language, Comm. ACM 8 (1965) 607-615.
10. K. Iverson: A Programming Language, John Wiley, New York 1962.
11. F.J. Hill and G.R. Peterson: Digital Systems: Hardware Organisation and Design, John Wiley, New York 1973.
12. M.R. Barbacci and A. Parker: Using Emulation to Verify Formal Architecture Descriptions, IEEE Computer 11(5) (1978) 51-56.
13. C.G. Bell and A. Newell: The PMS and ISP Descriptive Systems for Computer Structures, AFIPS SJCC 36 (1970) 351-374.
14. M.R. Barbacci: The ISPL Compiler and Simulator User's Manual, Computer Science Dept., Carnegie-Mellon University, Tech. Report, Aug. 1976.
15. J.R. Duley and D.L. Dietmeyer: A Digital System Design Language (DDL), IEEE Trans. Computers C-17 (1968) 850-861.
16. T.D. Friedman and S.C. Yang: Methods used in an Automatic Logic Design Generator (ALERT), IEEE Trans. Computers C-18 (1969) 593-614.
17. J. Mermet and F. Lustman: CASSANDRE: Un Langage de Description de Machines Digitales, Rev. Fr. d'Inf. et de Rech. Operationelle (1968) 3-35.
18. J. Leraillez, F. Sarre and B. Waterlot: CRISMASS: A Tool for Conception, Realisation and Simulation of Sequential Synchronous Circuits, Int. Sym. on Computer Hardware Description Languages and their Applications, New York (1975) 1-6.
19. Crockett, Frandeen, Isberg, Bryant, Dickinson and Paige: Computer Aided System Design, AFIPS SJCC 36 (1970) 287-296.
20. M.A. Baray and S.Y.H. Su: A Digital System Modelling Philosophy and Design Language, Proc. 8th Annual Design Automation Workshop (1971) 1-22.
21. C. Clare: Designing Logic Systems using State Machines, McGraw-Hill, New York 1973.
22. S.C. Kleene: Representation of Events in Nerve Nets and Finite Automata, in Automata Studies, Annals of Math. Studies No. 34, Princeton University Press (1956) 3-41.
23. G. Ott and N. Feinstein: Design of Sequential Machines from their Regular Expressions, J. Assoc. Comp. Mach. 8 (1961) 585-600.
24. J. Brzozowski: Derivatives of Regular Expressions, J. Assoc. Comp. Mach. 11 (1964) 481-494.
25. M. Minsky: Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ 1967.
26. P. Stigall and O. Tasar: A Review of Directed Graphs as applied to Computers, IEEE Computer 7(10) (1974) 39-47.
27. J. Brzozowski: A Survey of Regular Expressions and their Applications, IRE Trans. Electronic Computers EC-11 (1962) 324-335.
28. C.A. Petri: Communication with Automata, Schriften des Rheinisch-Westfälischen Institutes für Instrumentelle Mathematik an der Universität Bonn 2 (1962).
29. J.L. Peterson: Petri Nets, ACM Computing Surveys 9(3) (1977) 223-252.
30. S.S. Patil: On Structured Digital Systems.
31. J.B. Dennis: Concurrency in Software Systems, Computation Structures Group Memo 65-1, Project MAC, MIT, June (1972) 1-18.
32. F. Commoner, A.W. Holt, S. Even and A. Pnueli: Marked Directed Graphs, J. Comp. System Sci. 5 (1971) 511-523.
33. R.M. Karp and R.E. Miller: Properties of a Model for Parallel Computation: Determinacy, Termination, Queueing, SIAM J. Appl. Math. 14 (1966) 1390-1411.
34. C.W. Rose: LOGOS and the Software Engineer, AFIPS FJCC 41(1) (1972) 311-323.
35. F.G. Heath: The LOGOS System, IEE Conf. on CAD, IEE Pub. 86 (1972) 225-230.
36. D. Lewin: Computer Aided Design of Digital Systems, Crane-Russak, New York 1977.
37. D. Lewin, E. Purslow and R.G. Bennetts: Computer Assisted Logic Design: the CALD System, IEE Conf. on CAD, IEE Pub. 86 (1972) 343-351.
38. S.J. Hong, R.G. Cain and D.L. Ostapko: MINI: A Heuristic Approach for Logic Minimisation, IBM J. Res. Dev. 18 (1974) 443-458.
39. W.M. Van Cleemput: Computer Aided Design of Digital Systems: A Bibliography, Computer Science Press Inc., Woodland Hills, Calif. (1976).
40. H.A. Sholl and S.C. Yang: Design of Asynchronous Sequential Networks using Read-Only Memory, IEEE Trans. Computers C-24 (1975) 195-206.
41. I.E. Sutherland and C.A. Mead: Microelectronics and Computer Science, Scientific American 237(3) (1977) 210-228.

Figure 1. Directed graphs: a) directed graph, b) net, c) network.

Figure 2. Petri nets: a) marked net, b) net after firing, c) conflict situation.

Computer-Aided Design of digital electronic circuits and systems, North-Holland Publishing Company
© ECSC, EEC, EAEC, Brussels and Luxembourg, 1979

SIMULATION OF DIGITAL SYSTEMS: WHERE WE ARE AND WHERE WE MAY BE HEADED*
Results of this work are also presented. the constraining point in future development is going to be the ability to verify and test such systems. which can be used for logical verification. in an attempt at satisfying the objectives stated above. when the prototype is established to be working correctly. or detailed analysis. is digital logic simulation. while maintaining the level of accuracy required. and simulation of faculty networks for the verification of test sets. EEC. modular or functional level simulation. Storage and run time results for some existing functional elements are * This work was supported in part by Comprehensive Computing Systems & Services Inc. will be impossible. Manual design and test set verification. Ono way to increase the capability of simulation. as well as a discussion of the evolution of this capability toward functional simulation. The reason for this is that prototyping of a system often occurs in a technology other than the one that is eventually used for the system. Szygenda Electrical Engineering Department EMS 517 The University of Texas Austin. The objectives of our work have been to increase the capabilities of simulation-both simulation of a fault-free network for logic verification and timing analysis.A. will also become impossible for these large systems. The following sections of this paper present some concepts which we have been developing. and not its timing properties. in a reasonable manner. INTRODUCTION As LSI technology gives way to VLSI's technology. and generating tests. and test set verification.e. Staiteli S Luxembourg. is to deal with digital logic at a more abstract level. TX. Texas 78712 I. the only thing that has been verified is the logical correctness of the device. This is particularly true when one is considering accurate timing analysis. The third method. This includes the capability of simulating. Section II of this paper will consider the state of the art for simulation. 
for large networks in a cost-effective manner, with similar accuracy to that achievable at the gate level, for various design verification and fault simulation applications. The problem is that the state of the art of digital simulation today is only adequate to process 5,000 to 20,000 elements, and these elements are normally low level Boolean gates or flip-flops. In order to be able to handle VLSI, this capability must therefore increase, not decrease, for such systems.

In Section III we discuss automatic partitioning of a digital network into small combinational units which can be simulated at a higher level. Section IV considers the development of algorithms and data structures to support very accurate modeling of functional units for non-fault and fault simulation; results of this work are also presented. Section V considers a general diagnostic test generation system that would interact with a fault simulator, in which timing and topological concepts are considered. Automatic generation of element models to be used in the simulation is also discussed.

II. EXISTING SIMULATOR CAPABILITIES

The measure of a good simulator is the accuracy and efficiency with which it does its job. Before proceeding with this discussion, a distinction should be made between the terms design verification and logic verification. Design verification is accepted by most to mean that the logic correctly performs the function that the designer intended. It is also quite clear that exact simulation and modeling of a sizable physical system is impossible, because of the infinite number of possible fabrication errors. Hence, it is essential that the gap between the simulation model and the physical system be reduced as much as possible. Occasionally a subset of the design verification problem is considered, namely verifying the logical correctness of the device within the limits of the simulator; this is called logic verification.

Early simulation systems unfortunately possessed neither accuracy nor efficiency. They were characterized by zero delay models for the elements, two value models (zero and one representations for signal values), and primitive single output Boolean element types. The executable code was of a compiled variety, making timing analysis impossible; therefore, their use was restricted to logic verification only. These systems only considered classical stuck-at-one and stuck-at-zero faults as their fault models, and hazard and race analysis were not performed. It is generally accepted that these early simulators suffered from a lack of accuracy and a high cost of use: they were extremely expensive, difficult to use, and plagued with model inaccuracies. They did, however, demonstrate that digital logic simulation was feasible and necessary for design verification and fault diagnosis. This objective has given rise to present day simulation systems.

Present day systems are capable of doing detailed timing analysis (using multivalued simulation philosophies) with spike, race, and hazard analysis. In addition to the familiar classical stuck-at-zero and stuck-at-one models, they can handle shorted faults, shorted input diodes, complex transformation faults, multiple faults and, to some degree, even intermittent faults. They also permit different types of fault models, they can perform oscillation control, and they have numerous other user options. Furthermore, these present day systems can simulate rather large numbers of elements.

Functional Simulation

Functional simulation can be considered as part of the present day simulation capability, although a number of questions still remain with respect to increased accuracy and efficiency of functional level simulators. It must be remembered that the objective of functional simulation is to lose a minimum amount of accuracy while making considerable gains in speed and storage. Furthermore, the philosophy of modeling only input-output pin faults does not provide for complete coverage of internal faults. The following questions are typical of those that must be answered before true functional level simulation can be achieved:

- How are faults internal to the functional models handled? It can be easily demonstrated that numerous internal faults do not manifest themselves as input-output pin faults.
- How is timing through the functional models handled?
- How are faults propagated through the functional modules?
These are not I/O faults or faults internal to the module; they are faults that may occur upstream of the particular functional module, and they must be propagated through the module.

- How are multiple values propagated through functional modules?
- How are illegal inputs to these modules handled, and what output is produced when illegal inputs exist?
- How do we achieve accurate modeling of the functions, including propagational delays?
- How does one provide nested functional simulation capability with the accuracy desired?
- What are the most efficient implementation techniques for functional simulation?

While many of these questions still remain to be answered in the most complete sense, some functional simulation capability does exist today. A functional system that exists today is the CC-TEGAS3 [1,2] system, in which these questions have been answered in a limited sense. The results for a few selected functional elements from this system, for both design verification and fault simulation, are provided in Table 1. These results clearly demonstrate that significant gains can be achieved in both storage and run time. We should remember, however, that true functional simulation should sacrifice a minimum amount of accuracy; if we sacrifice too much accuracy, we have, in fact, achieved nothing. Therefore, considerable additional work lies ahead in the area of functional simulation. In a later section of this paper we will discuss some of the more recent research that is underway in this area.

III. MODULAR SIMULATION CAPABILITIES

Even with the advent of functional simulation capabilities, many applications still require an accuracy only achievable at the gate level. However, the size of these problems is rapidly exceeding present simulator capabilities, due to either storage or efficiency limitations, since the number of gates in the networks being considered is usually in excess of ten thousand. This situation has prompted our work in modular simulation. The objective of this work is to partition the net in a manner permitting simulation at an accuracy similar to gate level, with reduced storage and run time requirements. To do so, the partitioning of the gate level network into modules must be automated, along with the automatic generation of evaluation routines from the resulting prefix strings. Since the number of modules is not expected to be more than an order of magnitude less than the number of gates,
this automation is extremely desirable.

TABLE 1 - EXAMPLE FUNCTIONAL SIMULATION RESULTS
(Storage and run time, as percentages of those used at gate level, for BCD decoders, a multiplexor, and a 32-bit serial shift register, each under nominal delay, ambiguity delay, and fault simulation; the legible values range from about 7% to 50%.)

It should be noted that the savings of storage and run time become more spectacular as the complexity of the device increases.

The remainder of this section will consist of a discussion of the development, implementation, and evaluation of algorithms for automatically partitioning networks into prefix strings, and then automatically generating evaluation routines from these strings. Once these modules are formed, routines to evaluate them must be written before they can be simulated; obviously, the writing of these evaluation routines must also be automated.

The justification for performing the partitioning is based on theorems which proved the validity of moving the output delay of a gate to the gate's inputs and then combining these input delays with the output delays of the gate's fan-ins. This process is demonstrated in Fig. 1 (delay moving: a gate's output delay d1 is combined with the fan-in output delays, yielding input delays such as d1 + d2 and d1 + d3). The process can be performed repeatedly, gradually removing delays from gates and forcing them towards the network's inputs, and is repeated until the primary inputs have been reached.

There are, of course, some restrictions in the delay moving process. One such restriction arises in the case of reconvergent fanout: delay moving must cease at a point of reconvergent fanout.
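Before the restrictions come into play, the basic delay-moving step itself can be sketched as follows. This is a modern Python illustration, not the original implementation; the gate-table layout (name mapped to output delay and fan-in list) is a hypothetical convenience.

```python
# Sketch of the delay-moving transformation: a gate's output delay is removed
# and pushed onto its input connections, combined with each fan-in's output delay.
# gates: name -> (output_delay, [fan-in names]); primary inputs have no entry.

def move_delay_to_inputs(gates, name):
    """Zero the gate's output delay; return the delay each input connection
    now carries (moved output delay + the fan-in's own output delay)."""
    d_out, fanins = gates[name]
    gates[name] = (0, fanins)
    return {f: d_out + (gates[f][0] if f in gates else 0) for f in fanins}

# Example: G1 (delay 2, inputs A, B) feeds G2 (delay 3, other input C).
gates = {"G1": (2, ["A", "B"]), "G2": (3, ["G1", "C"])}
input_delays = move_delay_to_inputs(gates, "G2")
# input_delays == {"G1": 5, "C": 3} -- the A->G1->G2 path delay 2+3 is preserved.
```

Applying the step repeatedly, output by output, pushes all delays toward the primary inputs, as the text describes.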
The reason is that, at such a point, it is possible for two different delay values to arrive at the output of a gate, which would of course present an ambiguous situation (Fig. 2).

Another restriction in the delay moving process occurs in the case of feedback (Fig. 3). In this case, the input of a gate is tied to the output of a gate further downstream. Attempting to move the delay past this point pushes the delay towards the network's outputs instead of the inputs, and would result in the delay being propagated infinitely, since the feedback forms a loop.

(Figs. 2 and 3 - delay moving restriction examples: reconvergent fanout; feedback.)

With this background we can consider the actual partitioning algorithm.

Partitioning Algorithm

The basic steps in the algorithm are as follows:

1. Starting at a primary output, follow back each path until a point of multiple fanout is encountered. This point will then serve as an input to the module currently being formed and as an output of another module.
2. All gates encountered along the way are included in the module being formed. As each gate is included in the module, the delay which has been propagated to the output of the gate is added to the actual delay of the gate and then propagated to the outputs of the fanins. Also, as the gates are encountered, the logical description of the collection of gates being included in the module is built.
3. Each path ending at a gate which formed a termination point is traced back, in the same manner, until all paths ending at that gate reach a termination point.
4. The process is repeated until all paths ending at the primary output chosen have been traced to a point of termination.
5. The entire process is repeated for each primary output.
6. After the last primary output has been traced, a collection of modules will have been formed which includes all gates in the network, along with a logical description of each module and the necessary input delay information. These modules will be interconnected in the modular level network.

It should be noted that a separate module must be created for each primary input and primary output.

Implementation of Partitioning

As mentioned in the algorithm description, all gates must be traced back, beginning at the primary outputs and finally ending at the primary inputs. This tracing back of fanins essentially constitutes the traversal of a graph, and can be done either breadth first or depth first. In this work the breadth first search technique was chosen, due to the fact that it was not known what the best set of termination conditions was; a termination point which was passed might otherwise not be discovered until a search along another path encountered the point again. For the breadth first search, a hierarchy of queues (or other similar data structures) must be used to perform the search, and a tree must be built to store the logical description; the tree is necessary since it is not possible to form a logical string or equation for the gates being traced directly during a breadth first search. Once the tree is built, it can then be traversed in some predetermined order.

If the termination conditions are limited to those used in this work (terminating at any point of multiple fanout, reconvergent fanout, or feedback), then the depth first search seems to be the best choice. For the depth first search, no hierarchy of queues is needed; it is necessary only to keep a stack to perform the search. Since a depth first search is equivalent to a preordered traversal of a tree, it is possible to build a logical string directly during the search.

Using the techniques described, a program was written and examples were run. It was found that, in a general circuit, one can expect between 2 and 4 gates per module; this would mean that the size of the original network would be cut by 50 to 75%. In spite of the present limitations, these new techniques show great promise for increasing the size of networks which can be simulated, and further research is being performed in an attempt to extend the partitioning capabilities in order to eliminate those limitations.
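A depth-first back-trace that builds the prefix logical string directly can be sketched as follows. This is an illustrative modern Python rendering, not the original program; the netlist layout and the `terminals` set (the assumed termination points, e.g. multiple-fanout nodes) are hypothetical, and the language's call stack stands in for the explicit stack mentioned in the text.

```python
# Sketch: trace back from one output, emitting a prefix-form logical string.
# netlist: signal -> (operator, [fan-in signals]); primary inputs have no entry.

def build_prefix(netlist, signal, terminals):
    """Stop at termination points (they become module inputs); otherwise
    recurse into the fan-ins, building the string during the search."""
    if signal in terminals or signal not in netlist:
        return signal
    op, fanins = netlist[signal]
    return op + "(" + ",".join(build_prefix(netlist, f, terminals) for f in fanins) + ")"

netlist = {
    "z": ("OR",  ["x", "y"]),
    "x": ("AND", ["a", "b"]),
    "y": ("NOT", ["c"]),
}
s = build_prefix(netlist, "z", terminals=set())
# s == "OR(AND(a,b),NOT(c))"
# With x as a termination point (e.g. a fanout node), it becomes a module input:
t = build_prefix(netlist, "z", terminals={"x"})
# t == "OR(x,NOT(c))"
```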
The only drawback of a depth first search is that it must be possible to recognize a termination point as soon as it is encountered, or else the search would continue far past the point which was passed.

The limitations of the present system are as follows:

1. The network which can be modularized is limited to single output gates.
2. The modules formed are limited to single output modules.
3. The timing equivalence of the modular level network can only be guaranteed in the case of nominal delay; for more detailed delays, the modular level network formed may be more pessimistic than the original gate level network.

Automatic Generation of Element Routines Produced by the Partitioning Algorithm

Since parallel fault simulation techniques are the most widely tested and utilized, the decision was made to use an equation method of evaluation which would be compatible with parallel fault simulation. Additionally, it was assumed that three valued logic would be used: 0, 1, and x (unknown). This necessitated the use of two words to represent each signal, referred to as the CV and the CV2 words. A representation for the logic values was chosen which resulted in a minimal number of terms in the CV and CV2 equations for the "and", "or", and "not" operations. With this representation there existed a simple isomorphism between the set of Boolean equations for two valued logic and the set of CV and CV2 equations for the three valued logic. For example, the Boolean equations

B1: C = A . B
B2: C = A + B
B3: C = A'

map onto

T1: CV(C) = CV(A) . CV(B),  CV2(C) = CV2(A) + CV2(B)
T2: CV(C) = CV(A) + CV(B),  CV2(C) = CV2(A) . CV2(B)
T3: CV(C) = CV2(A),  CV2(C) = CV(A)

Code Generation Algorithm

Because of this isomorphism, the algorithm developed consisted of a lexical scanner to verify both the syntax and semantics of the input equation and to output FORTRAN statements which implemented the corresponding CV and CV2 equations; this outputs the equations in terms of logical operators rather than logic operations. The remainder of the algorithm generated a subroutine heading, formatted the equations output by the scanner, and generated a subroutine trailer.

This work indicated that automation of simulation evaluation routines for logic functions is feasible and efficient. In addition, such a capability seems to have additional possible uses, as a tool for designers to model their own defined functions. The same types of extensions as described for partitioning are being considered for automatic code generation.

Implementation of Code Generation

The code generation algorithm was implemented on a CDC 6600 and was written in PASCAL. PASCAL was chosen as the implementation language because it is well structured and the available compiler gives excellent run time diagnostics. Additionally, PASCAL supports recursion; by implementing the lexical scanner recursively, it was not necessary to maintain a stack or to construct "pop" and "push" routines. Figure 4 shows the general algorithm, while a more detailed version of the lexical scanner routine is shown in Figure 5.
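The two-word encoding implied by the T1-T3 mapping can be illustrated as follows. The bit interpretation (CV = "definitely 1", CV2 = "definitely 0", so x is encoded as both bits clear) is an assumption inferred from the mapping, shown here one bit wide in Python; with full machine words, one pair of words evaluates many parallel-fault copies at once.

```python
# Sketch of the (CV, CV2) three-valued encoding assumed from T1-T3.
ZERO, ONE, X = (0, 1), (1, 0), (0, 0)   # (CV, CV2) pairs

def and3(a, b):   # T1: CV = CV.CV, CV2 = CV2 + CV2
    return (a[0] & b[0], a[1] | b[1])

def or3(a, b):    # T2: CV = CV + CV, CV2 = CV2.CV2
    return (a[0] | b[0], a[1] & b[1])

def not3(a):      # T3: simply swap the CV and CV2 words
    return (a[1], a[0])

# Familiar three-valued identities fall out of the encoding:
assert and3(ZERO, X) == ZERO      # 0 AND x = 0
assert and3(ONE, X) == X          # 1 AND x = x
assert or3(ONE, X) == ONE         # 1 OR x = 1
assert not3(X) == X               # NOT x = x
```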
PROGRAM GENERATE (INPUT-FILE, STATUS-FILE, OUTPUT-FILE)
begin
  open input-file; open status-file; open output-file;
  initialize count of subroutines attempted;
  initialize count of subroutines successfully generated;
  initialize count of subroutines lacking trailers;
  repeat
    set error flag to indicate that no error yet in present equation;
    perform check of header for errors, obtain module type id,
      and increment count of subroutines attempted;
    if no error in header then
      write header for subroutine to output-file;
      process equation, generate subroutine body, and write it to output-file;
      if no error in equation then
        if equation has trailer then
          increment count of subroutines generated successfully;
          write trailer for subroutine
        else
          set error flag to indicate trailer missing;
          increment count of subroutines lacking trailers
        end if
      end if
    end if;
    case error of
      error: write error message to status-file, including module type;
      trailer error: write message that subroutine was generated but trailer was missing;
      no error: write message that subroutine was successfully generated, including module type
    end case
  until end of file encountered on input-file;
  write number of subroutines attempted to status-file;
  write number of subroutines generated to status-file;
  write number of subroutines generated for which trailer was missing to status-file
end program.

Fig. 4 - Pseudocode program to generate evaluation routines for logic modules.

begin
  clear stack;
  while not end of file do
    get ch;
    if ch = + then
      write "OR("; push 2 onto stack
    else if ch = * then
      write "AND("; push 2 onto stack
    else if ch is a variable then
      write variable;
      count := pop stack;
      while count = 1 do
        write ")";
        count := pop stack
      enddo;
      if count = 2 then
        write ","; push 1 onto stack
      endif
    endif
  enddo
end.

Fig. 5 - Procedure for generating evaluation routines, using statement functions, from a prefix equation.

IV. ACCURATE FUNCTIONAL SIMULATION CAPABILITIES

This experimental work centered around the development of concurrent simulation techniques for simulating at the functional level, for fault analysis, with the minimal amount of sacrifice in accuracy. The work concentrated on being able to simulate functional modules, and has demonstrated both advantages and disadvantages of this technique. The simulator was designed to handle any number of signal values; the number of signal values comes into play when modeling an element or device. The algorithms and data structures developed and implemented support multiple input/output and memory/no memory elements, as well as gates. Accurate timing analysis can be performed, utilizing minimum/maximum delays and propagating ambiguity areas through the network, including detection of races and hazards; hence, spikes caused by the presence of faults can be detected.

In concurrent simulation a fault is handled in the same manner as the good element; hence, the accuracy of simulating a fault is the same as the accuracy of simulating the non-fault model. Once a fault has been injected, that fault's effects must be continuously modeled until it is detected, or inaccuracies will result. The simulator structure supports modeling of any user defined non-classical faults, such as: (i) functional behavior faults; (ii) technology dependent shorted signal faults; (iii) timing faults, i.e., faults that affect propagation delays and operational timing parameters such as setup/hold times.
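As an illustration of functional-level modeling that retains timing information, the sketch below shows a flip-flop-style element that carries an operational timing parameter (setup time) and checks it on each clock. This is a hedged, simplified Python example, not the paper's simulator; the class layout and parameter names are hypothetical.

```python
# Sketch: a functional J-K flip-flop model. Behavior is defined functionally
# (hold / set / reset / toggle), while a setup-time operational parameter is
# checked at each clock edge instead of simulating internal gates.

class JKFlipFlop:
    def __init__(self, setup_time):
        self.setup = setup_time      # operational timing parameter
        self.j = self.k = 0
        self.q = 0
        self.last_input_change = None
        self.violations = []         # times at which setup was violated

    def set_inputs(self, t, j, k):
        self.j, self.k, self.last_input_change = j, k, t

    def clock(self, t):
        # Timing check: inputs must be stable at least `setup` before the edge.
        if self.last_input_change is not None and t - self.last_input_change < self.setup:
            self.violations.append(t)
        # Functional behavior.
        if self.j and self.k:
            self.q ^= 1             # toggle
        elif self.j:
            self.q = 1              # set
        elif self.k:
            self.q = 0              # reset
        return self.q

ff = JKFlipFlop(setup_time=5)
ff.set_inputs(0, 1, 0)
ff.clock(10)        # stable for 10 >= 5: no violation, Q becomes 1
ff.set_inputs(12, 0, 1)
ff.clock(14)        # stable for only 2 < 5: violation recorded, Q becomes 0
```

The same check can be applied to the faulty as well as the good element, which is the point made in the following paragraphs.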
However, in the experimental version of the simulator, only stuck-at faults and timing faults were implemented. Checks for violation of the operational timing parameters, such as setup/hold times and pulse widths, can be performed for both the good element and the faulty element.

Functional elements are modeled independent of the device's internal gate configuration: the data structure that defines a fault, as well as an element, incorporates the necessary timing information, and the model simulates the device's functional behavior. The J-K flip-flop is a very basic functional element when modeled by its functional behavior rather than its internal gate circuitry. The flip-flop model uses different propagational delays, based on the input that is active and on operational timing parameters like setup/hold times and pulse widths.

In concurrent simulation, each individual fault effect is handled separately. The fact that it is handled separately is the very thing that allows these faults to be simulated at an accuracy consistent with the functional models; however, it is also very costly in terms of space and speed. The run times for networks simulated in the present concurrent fault simulator were very high when compared with run times of the same networks simulated in an existing parallel fault simulator: in the experiments that have been run, the concurrent fault simulator was more than twice as slow as the parallel fault simulator. It has been determined, through the analysis of the existing experimental model, that a large percentage of the time for concurrent fault simulation is spent in searching fault lists; it has also been determined that what contributes to the search time, and also to the storage requirements, is this separate handling of each fault effect. Although future work needs to be done in this area, the results of this analysis will direct our future experimental work.

A problem that has plagued concurrent fault simulation (as is true of deductive simulation) has been that it requires large amounts of storage: the amount of storage needed in a given simulation run is dynamic and unpredictable. However, a heuristic approach has been developed which appears to help control this phenomenon. If it is possible to determine how much storage is left at any point in time, then one can make decisions about when new faults should be injected into the network. At any time that a fault is to be inserted, the decision is first made as to whether a sufficient amount of space is still left; if not, no new fault is inserted into the network. This is accomplished by the use of an indicator for each fault to determine whether it has ever been injected into the network. This is not a foolproof plan, in that any previously induced fault activity can continue to escalate and still saturate the available storage space; however, in the experiments that have been run, this solution seems to work very effectively.

From this work, and having a working model, it can be seen that the problems with concurrent simulation are primarily those of speed and storage.

V. HIERARCHICAL METHODS FOR GENERATING TESTS FOR SEQUENTIAL LOGIC NETWORKS, IN A SIMULATION ENVIRONMENT

Generating diagnostic tests for sequential logic networks is a major problem of the integrated circuit industry. The trend toward highly sequential LSI and VLSI integrated circuits has made existing methods questionable. The goal of this work was to develop a test generator that would solve this problem for LSI networks, and be adaptable to VLSI networks. There are two major aspects of the solution to this problem; the first of these is a preprocessing of the network.
The preprocessing obtains information about the topology of the network, producing a leveling of the network and the identification of all feedback loops. It also determines the sequentialness of the network, including depth of sequentialness and degree of sequentialness: this includes a count of all the memory elements in the network and the maximum (minimum) sequentialness for each element, i.e., the maximum (minimum) number of memory elements in a path from that element to a primary input of the network. This information is useful to the test generator and to the user controlling the test generation process.

The algorithm to perform the preprocessing is as follows. All primary inputs are assigned to the first level. An element is then assigned to a level if all its fanins have been assigned to previous levels; as each element is assigned to a level, the sequentialness information is updated. When a situation occurs such that not all elements have been assigned to levels, but none of these have all their fanins in previous levels, there must be a feedback loop in the network. A depth first search is performed on one of the unassigned elements to locate the loop; that element is assigned to a level and the process continues. All this information is used by the test generator.

The second aspect of the solution to this problem is the actual generation of the diagnostic tests. The methods chosen to do this consist of user selectable options, including manual, heuristic, or algorithmic path sensitization, or any combination thereof. The user can use the feedback and sequentialness information to determine the best method for test generation: for a network with few feedback loops and a small amount of sequentialness, heuristic methods have a high probability of detecting a sufficient number of faults, while as the sequentialness increases, the use of a test generator that considers the topology of the network becomes essential.

Algorithmic Test Generation

This part of the system was designed on a modular basis. First, the modified single dimensional path sensitization technique will be considered. There is a control module that accepts user input and directs the test generator to the signal, and its value, to be path sensitized. The path sensitizer module will drive this signal to the desired value: it starts at the signal and drives backward through the net to the primary inputs. It uses the element evaluation module to set values on each element involved in the path or line justification; the element evaluation module makes use of the information found in preprocessing to determine the input values for each element. There is a conflict module that checks for two different values being placed on one signal; if this occurs, the backtrack module will retrace this part of the path to allow the conflict to be resolved. The hierarchy of uses involves these two modules. The generator can produce a sequence of patterns for sequential networks.

The user can control the process to produce very accurate tests at a relatively higher cost, or tests that might be less accurate, at a lower cost. The amount of conflict checking and backtracking allowed controls the speed, the accuracy of the tests, and the memory space needed. Table 2 gives some results of the leveling and test generation process for a few networks.

TABLE 2 - ALGORITHMIC LEVELING AND TEST GENERATION
(Results for four test circuits - an ALU, a magnitude comparator, a shift register, and a counter - over three passes each: the number of tests ranged from 6 to 48, test coverage from 63% to 93%, and CPU time from 7 to 60 seconds. A limit of 60 CPU seconds was used for these examples.)

Manual and Heuristic Test Generation

As digital simulators are used more extensively for design verification, the cost effective use of manually generated design verification tests, used for diagnostic test generation, becomes more attractive. It seems reasonable to expect that design verification tests that truly test the design would also provide a high level of fault coverage. Therefore, the use of tests generated in this manual manner would certainly be cost effective: it would eliminate, or reduce, the need for expensive automatic test generation runs.

In addition, many networks can be simply analyzed manually to determine a testing sequence. For example, consider an n input, 2^n output address decoder. Since we can have a stuck-at-1 or stuck-at-0 on any of the 2^n outputs, we can simply deduce that the only 100% test coverage set would require all possible 2^n inputs. For large n, this network would require extensive run times for automatic test generation. Many similar examples exist, and this simple observation could result in considerable cost savings.

The suggested use of manual generation is certainly selective in nature; it should not be construed as a suggestion for the exclusive use of manual generation. Rather, what is simply suggested is the intelligent use of whatever tools are available to perform the desired function in the most cost effective manner possible.

Another technique, for continual use, is heuristic test generation. This approach does not involve generation of tests for specific faults; rather, it approaches the network through simulation. Numerous heuristic techniques are possible; usually, those that are found to be the most successful are "popped" to the top of the stack, and test patterns which detect no faults are discarded. Heuristics provide a rapid test generation capability, with varying degrees of fault coverage. If additional fault coverage is necessary, the algorithmic capability can be used for those faults; utilizing this philosophy would result in using the more expensive algorithmic approaches only on a small subset of the total faults in the net.

Before concluding this section, one additional comment should be forcibly made. Since present day test generation systems use techniques and models far less accurate than what is achievable in present day fault simulators, it is essential that fault simulation be performed in conjunction with test generation, so that tests generated by any technique, beyond that achievable through manual and heuristic approaches, can be validated in terms of faults detected. In other words, we may suggest that the best test generator is a highly accurate fault simulator.
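The address-decoder argument above can be checked mechanically. The sketch below (illustrative Python, not from the paper) fault-simulates output stuck-at faults on a behavioral decoder model and shows that each of the 2^n input patterns is the unique detector of one stuck-at-0 fault, so 100% coverage needs all of them.

```python
# Behavioral n-input, 2**n-output address decoder, plus a tiny fault simulator
# for stuck-at faults on the output lines.

def decoder(n, addr):
    """Good-machine outputs: exactly one line (the addressed one) is 1."""
    return [1 if i == addr else 0 for i in range(2 ** n)]

def detecting_patterns(n, line, stuck_val):
    """Input patterns whose good and faulty outputs differ when `line`
    is stuck at `stuck_val`."""
    return [a for a in range(2 ** n) if decoder(n, a)[line] != stuck_val]

n = 3
# A stuck-at-0 on line i is detected only by the single pattern that selects line i:
assert all(detecting_patterns(n, i, 0) == [i] for i in range(2 ** n))
# A stuck-at-1 on line i is detected by every other pattern:
assert detecting_patterns(n, 0, 1) == list(range(1, 2 ** n))
```

Since every pattern is indispensable for some fault, the only 100% coverage set is all 2^n patterns, matching the deduction in the text.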
We must realize that these problems do indeed exist and must be solved. Szygenda. California. The Evolution of Functional Simulation from Gate Level Simulation. June 23-25. Proceedings of the 14th Design Automation Conference (June 1977). The luxury of reverting to manual analysis. that existed in the past.W. Mr. Computer Science Press. I have described the present state of the art in this area. Smith and Mr. Thompson. Breuer and A. Bose and S. Bose. References |1| S. Woodland Hills. Detection of Static and Dynamic Hazards in Logic Nets.A. E. M. B. Read.A. However. I would like to acknowledge some of my former and present students and industrial collegues that actively worked on these topics including: Dr. Acknowledgements Considerable portions of the work presented herein is of an unpublished nature. The breadth and complexity of these topics prevented detailed discusions.K. and some of the numerous additional unsolved problems that remain to be investigated. Bose.A. The necessity for solving these problems is being prompted by present day and proposed device technology and fabrication techniques. Dissertation. Szygenda. P. Friedman. D'Abreu. Szygenda. |3| Ajoy K. .D. |5| A. Karger. CONCLUSION 53 In this paper I have attempted to cover a very broad range of techniques and considerations involved in the areas of digital logic simulation and diagnostic test generation. Diagnosis and Reliable Design of Digital Systems. Computer Aided Design Journal. "MNFP-A New Technique for Efficient Digital Fault Simulation. Schowe. Pellegrin and A. 53. Szygenda.A. pp 145-150.. "Fault Test Analysis Techniques Based on Logic Simulation-FANSSIM. and R. 8. S. and T." Proceedings of the 14th Design Automation Conference. Chang." Proceedings of the 10th Design Automation Workshop. H.K.G. 15. "The Concurrent Simulation of Nearly Identical Digital Networks. S... Jr. Schuier. 1973 and IEEE Group. 
"An Efficient Method for Fault Simulation for Digital Circuit Modeled from Boolean Gates and Memories. P. Τ. Baker. pp 111-115.A. S. Breuer." IEEE Transactions on Electronic Computers. Science Press Inc. Dekker editor.E. pp 1431-1449. 11.54 S. J. April 1974.M."Functional Simulation in the LAMP System. "Modeling and Digital Simulation for Design Verification and Diagnosis.. April 1977. E.Y.K. and S. Lekkos. "TEGAS2—Anatomy of a General Purpose Test Generation and Simulation System for Digital Logic.A. June '77. Bryant and E.B.A. 6. December 1976.D." Design Automation Conference Proceedings. 1972.. 10. D.. Ulrich. 13.. of Design Automation Workshop. .A.G. S." Design Automation Conference Proceedings. "The Evolution of Functional Simulation from Gate Level Simulation.Y. Menon. Chappell.M." 9th DA Workshop. "Digital System Design Automation: Languages." The Bell System Tech." Proc. June 1973.W. Smith. Melvin Α. and A. S. L." 9th DA Workshop.. Abramovici. "A Computer Program for Logic Simulation. Bose. Baker. "Concurrent Fault Simulation and Functional Level Modeling.A.G. SZYGENDA Selected Simulation Bibliography 1. Simulation and Data Bases. 9. Schmidt. 1974. Fault Simulation and the Generation of Tests for Digital Circuits. Schuier.A.A. 5.P. '75.." Proceedings of the 1977 IEEE International Symposium on Circuits and Systems. October 1974. J.F. Elmendorf and L. 1977. and S. 4. C H .. pp 1451-1476." Proceedings of the 13th Design Automation Conference. and R. Breuer and K. Walford. A. Vol.G. Szygenda. 1972. G.. Szygenda." Comp. D. S. 7. M. 12. "LAMP: System Description. Vol. Cleghorn." Simulation of Systems. "Integrated Techniques for Functional and Gate Level Digital Logic Simulation. June 1976.W. S. "Logic Circuit Simulation" Bell System Tech.A. and E. North-Holland Publishing Co. 3.R. California. 1977. S. "Detection of Static and Dynamic Hazards in Logic Nets. Chappell." accepted April 1977. Woodland Hills. 2. Kumar. Hong. Ulrich. Szygenda. pp 453-459. J. M. 
Thompson... 14. pp 116-127. Ulrich. Szygenda. 53. 1976. Szygenda. SCANLAN.TECHNICAL SESSION II Chairman: J. Ireland . Dublin university. . EAEC. Preprocessing deals with the algorithmic analysis of a circuit to gather information to make test generation more efficient. (2) manual selection based upon functional performance. 1979 NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS Mel vi η Α. The proposed system uses a number of schemes for deriving tests. and (3) algorithmic. Most of the work which we will discuss deals with the concept of external testing. CÂACWLU and iyitemi Horth-HoWwd Publiihing Company © ECSC. California 90007 USA Abstract In this paper we first briefly review the current state of the art of automatic text program generation systems. B reuer Associate Professor of Electrical Engineering & Computer Science University of Southern California Los Angeles. We next present high level models for a counter which can be used by a test generation algorithm. In this paper we deal mainly with the problem of automatic (algorithmic) test generation. (2) algorithms for evaluating their function. 1■ INTRODUCTION This paper deals with several aspects of the design of software systems which aid an engineer in the development of fault detection and diagnostic tests for com plex digital systems. We describe in detail our results in two main areas. COMPUTER-AIDED DESIGN oi digital ele c troni c . such as (1) pseudo random methods. i. 57 . namely preprocessing and functional modeling. Such a system is often referred to as an Automatic Test Program Generation (ATPG) systems.C. Tests are evaluated via a concurrent fault simulator which employs both gate and higher level models. These models are expressed by (1) a set of dcubes. We then report on some of our current research dealing with the development of a very powerful computer aided system for the generation of tests for complex digital systems. Bnuiieli S Luxembourg. 
rather than self-testing systems. However, many aspects of the former mode of testing are applicable to the latter. We will present a brief state-of-the-art review of ATPG systems as well as discuss some of our more recent research results on this subject. We present results on two preprocessing concepts, namely rate analysis and cost analysis, and we show how the preprocessing techniques are used by our test algorithm as well as interact with our functional models. Preliminary results appear to indicate that these techniques will lead to much greater efficiency in test generation, since they directly attack the problem of buried flip flops as well as produce a more efficient search procedure for a test. This latter result has already been used successfully in the area of design for testability.

We depict a typical ATPG system in Figure 1.

Figure 1. Typical ATPG system. (Test sources — manual (functional or constructive), algorithmic or heuristic, RAM pattern generator, etc. — feed a good circuit simulator, a fault simulator, and dictionary construction.)

The most complex part of an ATPG system is test sequence generation. For complex systems a test is very difficult to construct, and numerous aids must be available to the test engineer if he is to construct an acceptable test at a reasonable cost. For simple circuits, either functional or algorithmically generated tests will suffice. Functional tests based upon the specs of a system provide an initial good test
set. Additional fault coverage can be obtained by employing random or algorithmic test generation methods.

Here the major function of the input preprocessor is to carry out certain syntactic and semantic checks on the circuit definition, to process the input data with respect to library data, and to collapse faults. Circuits containing several thousand gates can be easily processed in a few minutes of CPU time. Two new preprocessing concepts, to be discussed later, are cost and rate analysis. We will also assume a classical fault model in most of our discussion, namely the single permanent stuck-at fault. However, many of the results presented are applicable to a more general fault model.

1.1 Brief Review of the Current State-of-the-Art

Test sequence generation. Most research in the area of test sequence generation has been primarily concerned with gate-level combinational and sequential circuits using the permanent single stuck-line fault model. For this model, the problem of generating tests for stuck-at faults in combinational circuits has essentially been solved. The most common techniques are the Boolean Difference method [1], the D-algorithm [2,3] and the LASAR algorithm [4], the latter being a modified version of the D-algorithm. Variations and modifications of these methods exist [5,6,7]. It appears that the D-algorithm, in its original or modified form, is the most efficient procedure for generating tests for combinational logic circuits.

For sequential circuits the test generation problem becomes much more difficult. Several techniques exist [13,14], but for large classes of circuits these techniques are computationally infeasible. Hence the generation of tests for complex sequential circuits remains an open problem.
This problem is being attacked in two ways, namely by the development of more powerful test generation tools, and by simplifying the problem via design for testability. An example of the former is the work of Hill and Huey [15]. In this work a system is partitioned into its structural data processing portion and its control portion, and a register transfer type model is used. The search for a test then proceeds using both portions of the model. We believe that this is a fruitful approach to pursue, and eventually functional information concerning what a circuit is intended to perform may be included in such a model.

Researchers have also studied other fault modes, such as multiple stuck-at faults [8,9], bridge faults [10], delay faults [11,12], and intermittent faults [45-48]. Except for the latter case, the combinatorics of these problems make most of these results primarily of academic interest.

Tests are usually evaluated using simulators. The main problems in simulation appear to be excessive run time and model construction for large (LSI) modules such as microprocessors.

The area of design for testability is not the subject of this paper, but we should
mention that considerable effort is going into this area. Because of the increased complexities of digital circuits and systems, testability considerations in design are essential if future systems are to be maintained at a reasonable cost.

Random test generation methods have been successfully used in industry for moderate size circuits, and have been studied quite extensively by researchers [16-20]. When this approach is used, the ATPG system as shown in Figure 1 is not employed. Rather, the tests are generated on line by the ATE and the results compared to either a "golden board" or to signature data. The test generation costs are usually less than those generated by an ATPG system. Also high testing rates are achievable. However, initialization or synchronization can sometimes be a problem, and hazards and races can also lead to discrepancies between the circuit under test and the "golden board." Interestingly, it has been shown that exact synchronization between the circuit under test and the "golden board" is often not necessary [20]. Also, while random test generation cannot guarantee complete fault detection, complete test sets are difficult to generate. Fault detection and diagnostic data is usually obtained by physical fault insertion and probing. Our analysis of this effort indicates that this approach is viable for small and moderate size boards. For complex boards and microprocessors where the ratio of pins to gates is low, this approach appears to become impractical.

The use of on-line random testing has led to several studies in the area of compact testing, namely what data need be observed from the unit under test. Most notable results deal with transition count testing and Hewlett-Packard's signature analysis technique [21], the latter being intended specifically for systems containing microprocessors. These methods are potentially useful because of the small amount of hardware and software required to implement them. Transition counting has been analyzed from a deterministic viewpoint by Hayes [22-24] and from a probabilistic viewpoint by Parker [25] and Losq [20]. Various generalizations of transition counting have also been proposed [26-29]. These studies have assumed either that complete test sets are available, or that random test patterns are used. Neither assumption is particularly satisfying from a practical viewpoint.

Finally, there has been considerable work carried out in the area of generating tests for solid state RAM's, ROM's and PLA's. This work is both of a theoretical as well as practical nature [30-34]. Tests of complexity O(n^1.5) to O(n^3) have been developed, the former being feasible for 64K RAM's. When long tests are used, say several million test patterns, testing time can present a problem.
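The transition-count form of compact testing described above is easy to prototype. The sketch below is our illustration, not code from any of the cited systems; the response streams are made-up, and the "aliased" case shows why equal counts do not prove the absence of a fault.

```python
def transition_count(stream):
    """Number of 0->1 and 1->0 transitions in an output bit stream."""
    return sum(1 for a, b in zip(stream, stream[1:]) if a != b)

# A fault is *suspected* when the observed count differs from the
# reference ("golden board") count; equal counts may still hide a fault,
# which is why transition counting cannot guarantee complete detection.
golden  = [0, 0, 1, 1, 0, 1, 0, 0]
faulty  = [0, 0, 1, 1, 1, 1, 0, 0]   # a fault holds the output high
aliased = [1, 1, 0, 0, 1, 0, 1, 1]   # complemented stream, same count

print(transition_count(golden))   # 4
```

Here the complemented stream aliases to the good count, while the stuck-high response (count 2) is caught.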
Fault Simulation

One of the major problems in fault simulation is excessive CPU time. For years researchers have been investigating and developing new techniques to reduce run time. Most simulators are table-driven, event directed, employ multi-valued logics (usually 0, 1, x), and employ primitive gate elements having several delay parameters, such as separate rise and fall times [35]. The three most common fault simulation techniques are parallel simulation [37], deductive simulation [36] and concurrent fault simulation. Chang et al. [38] have studied the relative run times of parallel and deductive simulators and have come to the conclusion that for large circuits, the latter technique outperforms the former. We have recently developed an approximate analytic model for estimating the run time for parallel and concurrent simulators and have found that the latter also outperforms parallel simulators. We believe that concurrent simulation is the most general and fastest form of fault simulation. However, it can lead to serious memory requirements.

In our analysis we have assumed a fault detection model as shown in Figure 2.

Figure 2. Percent detection vs. test length.

For the following set of assumptions we have obtained the results shown in Table I.

Assumptions:
(a) average event activity - 10%
(b) 150 (185) instructions required to process (evaluate and schedule) one event in a parallel (concurrent) simulator
(c) host machine characteristics - 1 MIPS and unlimited core
(d) 36 faults processed in parallel
(e) number of gates in circuit - 10^4 (e.g., a VLSI chip)
(f) number of vectors in test

Table I. Estimated simulation time (hours) for fault simulation, for parallel and concurrent simulation, with and without fault dropping.
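The arithmetic behind a Table-I-style estimate can be sketched as follows. The parameter values and the cost model (a fixed instruction count per event, one full replay of the test per 36-fault pass) are illustrative assumptions of ours, not the paper's exact model.

```python
import math

def sim_hours(gates, vectors, faults, activity=0.10,
              instr_per_event=185, ips=1e6, faults_per_pass=36):
    """Back-of-envelope CPU-hour estimate for parallel fault simulation:
    each pass handles 36 faults at once, every pass replays all vectors,
    and each gate event costs a fixed number of host instructions."""
    passes = math.ceil(faults / faults_per_pass)
    events_per_pass = gates * activity * vectors
    return passes * events_per_pass * instr_per_event / ips / 3600.0

# Illustrative numbers only: 10^4 gates, 10^4 vectors, one fault per gate.
estimate = sim_hours(gates=1e4, vectors=1e4, faults=1e4)
```

Even this crude model lands in the hundreds-of-hours range for a 10^4-gate circuit, which is why fault dropping and concurrent techniques matter.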
1.2 Problem Areas

There are two major problems related to the area of ATPG, namely circuit "test complexity" and modeling. With the evolution from SSI through MSI and LSI to VLSI (10^4 gate equivalences/chip), over the last few years we have witnessed a doubling in chip density every two years. This growth is predicted to continue for at least the next 8 years. The most dramatic development has been the emergence of microprocessors. Inherent in the use of LSI and VLSI is the lack of knowledge of the detailed logic definition of these chips. In addition, information on fault modes and timing is lost as the level of integration increases and the pin to gate count decreases. This has made our classical gate and latch circuit models somewhat useless. Hence classical test generation and simulation approaches are becoming less applicable. Three potential solutions to these problems are (1) to employ functional level testing, (2) to design for testability, and (3) to employ higher level models.

Test sequence generation problems. For large complex circuits there are several specific problems which make test sequence generation particularly difficult, namely (1) modeling (delays and fault modes), (2) initialization, (3) problems associated with timing, such as races and hazards, and (4) buried flip-flops, i.e., flip flops which are hard to control or observe. The latter problem occurs most frequently when large counters and shift registers are used. These problems become more serious as the level of integration increases and the pin to gate count decreases. In some cases self testing rather than external testing techniques must be applied.
We have studied the problem of automatic initialization and have found no practical solution except to design circuits for easy initialization. The exact analysis of the initialization of faulty circuits requires a multi-valued algebra, and is impractical for most circuits. Problems of timing in test generation can be handled by using more accurate models. Here information on races and hazards is tracked during test sequence generation in order to produce more valid test sequences. Some results in this area are presented by Breuer [39,40]. We will address the problems of modeling and test generation efficiency in the next section.

2. SOME ASPECTS OF AN ADVANCED ATPG SYSTEM

Though design for testability is becoming a necessity, the availability of an ATPG system having a powerful algorithmic test sequence generation system and fault simulator will continue to be a useful tool for the foreseeable future. In this section we will outline some of our recent work in this area. We plan to implement these concepts in a system called TEST/80. The major aspects of this
system are outlined below:

1. Extensive preprocessing
2. High level primitives
3. Accurate timing analysis
4. Concurrent fault simulation

Preprocessing. By preprocessing we mean the generation of data about a circuit that can be used later by a test generation algorithm to make the process of generating a test more efficient. At present we are pursuing two specific preprocessing techniques: rate analysis and line cost.

2.1 Rate analysis

Rate analysis is a preprocessing technique in which some lines in a circuit are assigned labels, called rates. These labels indicate the maximum rate at which a line can change values. These rates can be used by an ATG algorithm to aid in assigning consistent logic values to lines. The rate associated with a line can be represented by a regular-expression-like notation; if A is a signal line, then the rate associated with the line is denoted by Ar.

In Figure 3 we illustrate a few timing diagrams and their corresponding rates. C is a clock pulse assumed to be high (H) for time t1 = 1 unit, and low (L) for time t2 = 1 unit. We assume that all rate expressions are in terms of this unit. Hence the clock sequence shown in Figure 3 can be denoted by the binary sequence 101010..., which, for brevity, is represented by (10)+. In general 1^n (0^n) represents a H (L) signal value lasting n units of time. Let QA, QB, QC represent the outputs of the flip-flops in a modulo 8 counter. Then the signals on these lines, along with their corresponding rate expressions, are depicted in Figure 3.

Note that rate expressions can be operated on via logical operations. The complement of a rate a is obtained by changing all 0's to 1's and 1's to 0's. The AND and OR of two rates a and b are denoted by a · b and a + b, respectively, and are also rates. We normally denote a rate (a)+ by simply writing a. Rates can be propagated through circuit elements, e.g., if C = A · B (AND gate) then Cr = Ar · Br, where Cr = Br if Ar = 1, and Cr = 0 if Ar = 0. For example, the rate on the output of the
AND gate forming QA·QB is (0^2 1^2)+ · (0^4 1^4)+ = (0^6 1^2)+. For a single line, the maximum rates are (10)+ and (01)+, and the minimum rates are 0+ and 1+.

Figure 3. Example of signals and rate expressions (C = (10)+, QA = (0^2 1^2)+, QB = (0^4 1^4)+, QC = (0^8 1^8)+, QA·QB = (0^6 1^2)+).

We say that (0^6 1^2)+ covers 0^8 1^3 0. Note that a sequence of the form 0^8 1 0 can be achieved on QA·QB by simply inhibiting the clock during some clock periods. Assume that Qc = QA · QB is the clock input to some device, such as a flip flop. Hence we can easily bound the rate at which this flip flop can change state. If during test generation it is desired to apply the input 010 to this flip flop, it is clear from rate analysis that this assignment is not possible. Hence enormous CPU time can be saved in endless backtracking, if one can predict as early as possible those sequences which cannot be achieved. The major application of rate analysis is the identification of allowable sequences on lines. Given a circuit, we are currently designing an algorithm for generating the maximum rate of each line of the circuit, including circuits containing feedback and reconvergent fanout.
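The rate algebra above can be prototyped by expanding periodic rates over a common period. This is a sketch of ours assuming purely periodic rates with no aperiodic prefix; `rate_and` and the list-of-runs encoding are our names, not the paper's.

```python
from math import lcm

def expand(rate, n):
    """Expand a rate given as [(bit, run_length), ...] to an n-bit string."""
    period = "".join(str(bit) * run for bit, run in rate)
    return (period * (n // len(period) + 1))[:n]

def rate_and(a, b):
    """AND of two periodic rates over one common (lcm) period."""
    n = lcm(sum(run for _, run in a), sum(run for _, run in b))
    ea, eb = expand(a, n), expand(b, n)
    return "".join("1" if x == y == "1" else "0" for x, y in zip(ea, eb))

qa = [(0, 2), (1, 2)]          # (0^2 1^2)+  -- QA of the mod-8 counter
qb = [(0, 4), (1, 4)]          # (0^4 1^4)+  -- QB
print(rate_and(qa, qb))        # '00000011', one period of (0^6 1^2)+
```

The second check below also reproduces the propagation rule Cr = Br when Ar = 1.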
Primary inputs usually are assigned the rate (01)+. Rates are usually created at the output of sequential devices, such as flip flops, counters and shift registers. Normally we select the inputs to these devices to have rates such that the output rates are maximum.

Consider a device triggered by a positive edge on the clock line. We denote this condition by Cp(↑). If (Cp)r = 0^n1 1^n2, then the rate of the positive edge is (Cp(↑))r = 0^(n-1) 1, where n = n1 + n2.

Consider a JK flip flop where Jr = 0^n1 1^n2, Kr = 0^n5 1^n6, and (Cp)r = 0^n3 1^n4. Then the output Q has rate Qr = 0^a 1^b, where
(1) a, b ≥ n3 + n4, i.e., the output cannot change faster than the input period;
(2) a ≥ n1, since as long as J = 0 the flip flop cannot be set;
(3) b ≥ n5, since as long as K = 0 the flip flop cannot be reset.
For J = K = 1 and (Cp)r = 0^n 1^m we obtain Qr = 0^q 1^q, where q = n + m, and this is the maximum rate for a flip flop.

In a similar fashion we can compute the rates at the output of shift registers and counters. As a simple example, consider a mod 16 counter having an enable line E, a clock line Cp, data outputs QA, QB, QC, QD, and carry output Cr = QA QB QC QD · E · Cp. For maximum rate activity we set Er = 1. For (Cp)r = 0^n 1^m (q = n + m) we obtain (QA)r = 0^q 1^q, (QB)r = 0^2q 1^2q, (QC)r = 0^4q 1^4q, (QD)r = 0^8q 1^8q. As can be seen, the complexity of rates can increase quite fast. Our major difficulty to date deals with simplifying complex rate expressions, and we are currently investigating methods for approximating these expressions by simpler ones.

Rates can also be concatenated. Let Xr = x1^n1 x2^n2 x3^n3 ... and Yr = y1^m1 y2^m2 y3^m3 ..., where xi, yi ∈ {0, 1}. To compute Zr = Xr · Yr, find a prefix of Yr of length n1, say p. If x1 = 0 then output 0^n1, else output the prefix p. Delete x1^n1 from Xr and the prefix from Yr; relabel if necessary, and return to the new rates Xr and Yr, repeating the procedure until both have been processed to the end. The result of concatenating the output data is Zr.

As an example of the use of rate analysis, consider the circuit shown in Figure 4.

Figure 4. Portion of a circuit.

Assume E, C and J are driven by large combinational circuits and that their rates are not known. Let K be a primary input. For the counter, the input rates which produce maximum output rates are Er = 1 and (Cp)r = 01. This gives us Cr = 0^31 1. For the JK flip-flop, we set J = K = 1, and since Cr = 0^31 1, we have that Qr = 0^32 1^32. We assign rates accordingly.

Assume that we desire to construct a test for the fault A s-a-0. To solve this problem we have the following subproblems: (1) set A = 1; (2) drive the resulting error to an output. To solve (1) we have the subproblems J = 1 and Q = 0 at time t - 1, and a positive clock edge at time t. To solve the problem Q = 0 we have, at some time t' < t - 1, C = 0.
By implication, we see that R = 0 also resets the counter and we have C = 0. In summary we have:

Time:    t'  ···  (t - 1)  t
Line C:  0   ···  0^31     1

Therefore, for t' = 1, rate analysis implies that C(t - 1) = 0^31 1 and t ≥ 32. Therefore at the very least a 32 clock time test is required.

2.1.2 Cost analysis

Our proposed test generation algorithm is similar to the D-algorithm in that it employs the concepts of line justification, D-drive and implication. The former two concepts usually imply choices. Our test generation algorithm can be modeled as a search procedure, where at any instant of time one has numerous subproblems on which to work. The order in which these problems are selected can greatly affect the run time of the algorithm. Cost analysis deals with the concept of assigning costs to subproblems. By selecting the order in which subproblems are processed based upon some function of cost, it is hoped that the total run time will be reduced. The problem is to construct an adequate cost function which will lead to a reduction in computation time. In this section we will discuss some of our results to date.

Rutman [41] has introduced the concept of assigning three line cost values to each line A in a circuit, namely cA, the cost of setting line A to a 1; cĀ, the cost of setting line A to a 0; and dA, the cost of driving a D (D̄) on line A to a primary output. Here D denotes an error being propagated to an output [2]. In general, problems of higher cost are more difficult to solve. Rutman's ATG system first calculates these costs, and then uses them to guide his search technique. Unfortunately his results, based upon three test cases, provided inconclusive support for this technique.* We believe that the main problem with Rutman's cost function is due to side effects and fanout. By "side effects" we
mean the effect on other devices which occurs when we set a line to a value to satisfy one specific requirement. For example, assume we desire to change the least significant bit of an n bit counter by incrementing the device. We can show that when the counter is incremented, the expected number E_T(n) of flip flops which change state is given by the expression E_T(n) = E_T(n-1)/2 + 1 = 2 - 1/2^(n-1). For a shift register, a shift operation will affect, on the average, E_S(n) = n/2 flip-flops. To drive a flip-flop to a 0, the "cheapest" solution may be to set a 0 on the master reset line R. But this would reset all the flip flops driven by R, and this could then eliminate state settings at 1 which were required.

We have modified and extended Rutman's work. For each primitive element in our system we have derived an equation which determines the cost of each output line of an element given the input line costs. We will now present some of our results. The cost cA is given by the equation

cA = min (cfA + csA + cdA, K),

where A is the output of an element of type t and K is an input parameter usually taken to be 32,000. The term cfA is that cost contribution due to the logical properties of t; csA is that cost contribution due to the side effects of setting A to 1; and cdA is a constant cost associated with the element type t. A similar equation holds for cĀ.

Calculation of cfA and cfĀ

Gate functions: Consider a gate having input lines i = 1, 2, ..., n, and let the output be A. Then we have the following results.

AND gate (A = 1·2 ··· n):
    cfA = Σ(i=1..n) ci        (each input must be a 1)
    cfĀ = min(i) {cī}         (at least one input must be a 0)

NAND gate: Interchange A and Ā. Similar equations hold for an OR and NOR gate.

* Private communication.
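The closed form for the expected counter activity above can be confirmed by enumerating all states of a small counter. This brute-force check is ours, not part of the proposed system.

```python
def expected_toggles(n):
    """Average number of bits that change when an n-bit counter increments,
    averaged over all 2^n states (wrap-around included)."""
    total = sum(bin(k ^ ((k + 1) % 2**n)).count("1") for k in range(2**n))
    return total / 2**n

# Closed form from the recurrence E(n) = E(n-1)/2 + 1:  2 - 1/2^(n-1)
print(expected_toggles(4), 2 - 1 / 2**3)   # 1.875 1.875
```

Bit k toggles only when all lower bits are 1 (probability 1/2^k), so the expectation is the geometric sum 1 + 1/2 + ... + 1/2^(n-1).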
We represent a sequence of input vectors by X(1), X(2), X(3), .... Then the cost of this sequence is the sum of the cost of each vector. Also, since Cp(↑) = (Cp = 0), (Cp = 1), then cCp(↑) = cC̄p + cCp.

Flip flop (JK, positive edge triggered; S, R asynchronous):

    Q = (J Q̄ + K̄ Q) S R Cp(↑) + S̄ R        (set or trigger; direct set)
    Q̄ = (J̄ Q̄ + K Q) S R Cp(↑) + S R̄        (reset or trigger; direct reset)

    cfQ = min (cS̄ + cR,
               cJ + cQ̄ + 2cS + 2cR + cCp + cC̄p,
               cK̄ + cQ + 2cS + 2cR + cCp + cC̄p)

    cfQ̄ = min (cR̄ + cS,
               cK + cQ + 2cS + 2cR + cCp + cC̄p,
               cJ̄ + cQ̄ + 2cS + 2cR + cCp + cC̄p)

Counter (reset R, load L, enable E, parallel inputs PA, PB, PC, PD, clock Cp, outputs QA, QB, QC, QD, carry Cr):

    QA = R̄ L PA + R̄ L̄ E Cp(↑) Q̄A            (load a 1; QA = 0 and increment)
    Q̄A = R + R̄ L P̄A + R̄ L̄ E Cp(↑) QA        (reset; load a 0; QA = 1 and increment)

Therefore

    cfQA = min (cR̄ + cL + cPA,  2(cR̄ + cL̄ + cE) + cCp + cC̄p + cQ̄A);

    QB = R̄ L PB + R̄ L̄ E Cp(↑) QA Q̄B         (load a 1; QA = 1 and increment)
    Q̄B = R + R̄ L P̄B + R̄ L̄ E Cp(↑) QA QB     (reset; load a 0; QA = QB = 1 and increment)

etc., and

    Cr = E Cp QA QB QC QD,  cfCr = cE + cCp + cQA + cQB + cQC + cQD.

In similar fashion, cost equations for other logic units can be derived.

Computation of side effects factor csA

When a line is set to a 0 or 1, due to its fanout this line setting may affect the logic value of many elements. We refer to this phenomenon as side effects. Consider a line which affects only gates and flip flops. Then the value of csA can be computed as follows. Let line A have logic value δ ∈ {0, 1}. Simulate the circuit with the initial condition A = δ and all other lines at x. The
contributions to csA are as follows:
(1) a line B (output of a gate) set to 0 or 1 contributes 1;
(2) a gate B having m binary inputs and an output at x has a contribution of m/n, where n is the number of inputs to gate B;
(3) if a flip flop B is set, or reset, its contribution is 4.

Again, this concept can be extended to more complex elements, such as shift registers and counters. For example, for a counter, if we set R = 0 (reset) we can assume that half the flip flops will change state. We thus have that csR = (n/2)(4 + G), where G is the average side effect associated with an output line of the counter. When a counter is incremented, E(n) flip flops may change state, hence the side effects cost is E(n)(4 + G).

In Figure 4 we indicate the flow chart for assigning the costs cA and cĀ to all lines in a circuit, where we assume that the rules for calculating the costs for each primitive element in the circuit are known. If A is a primary input, cfA = cdA = 0. The algorithm starts at the primary inputs to a circuit, calculates these costs, and then proceeds to process the elements to which these lines fanout. Because of feedback, some elements are processed repeatedly until their cost values stabilize. The final cost value is a measure of the controllability of each line in the circuit.

Figure 4. Computation of Line Costs cA and cĀ:
1. Set all line costs to 32,000.
2. Compute costs of all primary input lines (side effect cost only).
3. Put all elements on fanout lists of primary inputs into the frontier list.
4. Compute cA and cĀ for all outputs of all elements in the frontier list, and assign these costs to the lines.
5. For each line having a new cost, put all elements on its fanout list into the frontier list. Delete from the frontier list all elements assigned new costs.
6. If the frontier list is not empty, return to step 4; otherwise done.
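A minimal sketch of the Figure-4 style fixpoint computation, restricted to AND/OR gates: the frontier list is replaced by repeated full sweeps (which also handles feedback), and the side-effect (csA) and element-type (cdA) terms are set to zero for brevity. The encoding and function name are ours.

```python
K = 32000  # saturation constant for costs, as in the text

def line_costs(circuit, primary_inputs):
    """Sweep the controllability cost rules to a fixpoint.
    circuit maps an output line to ('AND' | 'OR', [input lines]).
    Returns {line: (c1, c0)}: cost of setting the line to 1 / to 0."""
    cost = {ln: (1, 1) for ln in primary_inputs}   # unit cost to drive a PI
    for ln in circuit:
        cost[ln] = (K, K)                          # unknown until computed
    changed = True
    while changed:                                 # re-sweeping handles feedback
        changed = False
        for out, (kind, ins) in circuit.items():
            c1s = [cost[i][0] for i in ins]
            c0s = [cost[i][1] for i in ins]
            if kind == "AND":   # every input 1 / at least one input 0
                new = (min(K, sum(c1s)), min(K, min(c0s)))
            else:               # OR: at least one input 1 / every input 0
                new = (min(K, min(c1s)), min(K, sum(c0s)))
            if new != cost[out]:
                cost[out], changed = new, True
    return cost

# c = a AND b ; d = c OR a
costs = line_costs({"c": ("AND", ["a", "b"]), "d": ("OR", ["c", "a"])},
                   ["a", "b"])
print(costs["c"], costs["d"])   # (2, 1) (1, 2)
```

Setting the AND output to 1 costs the sum of its input costs (2), while a 0 needs only the cheapest input (1), matching the cfA rules above.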
Computation of D-drive costs dA

The cost dA associated with a line A is an estimate of the difficulty of driving an error signal (D) on line A to a primary output (po). For each primitive element in our system we have developed an equation for calculating the D-drive cost of each input, given the line costs for each input and the D-drive cost of each output. Because of the complexity of these equations, we will only illustrate a few simple cases.

Let line X fan out to elements E₁, E₂, ..., E_p, and assume we have already computed the costs dE_i(X), i.e. the cost of driving a D on line X through E_i to a po. Then dX = min_i {dE_i(X)}.

Consider an element E of type t having input A and output X; then

    dA = dpA + dX + dt

where dpA is the cost of propagating a D from A to X, dX is the minimum cost of propagating the D at X to a po, and dt is a cost associated with the element type t of E. Usually dt increases with the number of clock times required to drive a D at A to X. Next we will indicate a few of the equations used to define dA.

1. Primary outputs: dA = 0 if A is a primary output.

2. Gate functions: for an AND gate with inputs 1, 2, ..., n, output X, and a D on input line i,

    di = Σ (j = 1, j ≠ i to n) cj + dX + 1

This follows since all other inputs to the gate must be at 1.

3. JK flip-flop with direct set and reset (negative logic) and positive edge triggering:

Case A) Propagation of J = D to the output. To propagate a D (or D̄) from J to Q or Q̄ we need S = R = 1, Q = 0, and a clock pulse. If J = 0 we have Q = 0, and if J = 1 we have Q = 1. Therefore

    dJ = min(32000, cQ̄ + 2cS + 2cR + cCp(t) + dX + 2)

In similar fashion we can compute dK, dS, dR and dCp. We have also developed D-drive equations for counters and shift registers.

In Figure 5 we indicate the flow chart for computing all D-drive costs in a circuit.
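The AND-gate rule above is simple enough to state directly in code. This is a hedged sketch of just that one rule (the function names and list encoding are our own): pushing a D through input i requires setting every other input j to 1 (paying its line cost cj), plus the onward cost dX of the gate output, plus the unit term.

```python
# D-drive rule for an n-input AND gate: di = sum_{j != i} cj + dX + 1.
def and_gate_d_drive(c, dX):
    """c: list of line costs for setting each of the n inputs to 1.
    Returns the D-drive cost d[i] for every input i."""
    total = sum(c)
    return [total - c[i] + dX + 1 for i in range(len(c))]

def fanout_d_cost(branch_costs):
    """dX for a fanout stem is the cheapest of the branch costs dE_i(X)."""
    return min(branch_costs)
```

For example, with input costs (2, 3, 5) and dX = 4, the cheapest input to drive is the most expensive-to-set one, since its own cost drops out of the sum.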
These costs are a measure of the observability of each line in a circuit. In fact, L. Goldstein of Sandia Laboratories* has extended our results and has successfully applied them to the area of design for testability, since these costs give the designer valuable information on observability and controllability.

    Set all D-drive costs to 32,000.
    Set the D-drive costs of all output pins to zero. Place all elements which
        have at least one primary output on the frontier list.
    Select an element E on the frontier list having an output with a minimum D
        cost. Flag this output from further processing. Compute the D-drive
        costs for its inputs when possible. Add its input elements to the
        frontier list (for those lines assigned a new cost). Delete duplicate
        entries.
    If the D-drive costs for all inputs to the selected element E have been
        computed, delete E from the frontier list.
    Frontier list empty? If yes, the cost computation is complete; otherwise
        repeat.

Figure 5. Computation of D-Drive Costs.

*Private communication.

2.2 High Level Models

In the LASAR system [4], the only primitive element is a NAND gate. In most other test-sequence generation systems the primitives consist of gates and flip-flops. In our system the primitive elements consist of gates, flip-flops and higher-level functional elements such as counters, shift registers, RAMs and ROMs. There are several reasons for taking this approach, namely:

1) most digital systems consist of interconnections of functional elements such as counters, shift registers, multiplexers, decoders, etc.;
2) in LSI and VLSI circuits the detailed logic of these functional elements is not known, hence gate and flip-flop models are not applicable;
3) higher-level models lead to considerable computational efficiency.

Hence several of the problems discussed in section 1 are alleviated or reduced in complexity by employing high-level primitives. The major disadvantage of this approach is the time required to develop the model for each primitive function. Breuer and Friedman [42] have reported on the development of high-level models for shift registers and counters to be used during test generation. In this section we will briefly summarize this work.

In generating a test for a circuit using the D-algorithm there are three main operations to be carried out, namely line justification, D-drive and implication. One could generate a set of cubes [2, 3] for defining each of these operations. For a NAND gate having inputs A and B, and output C, the cubes are shown below.

    primitive cubes        d-cubes
    A  B  C                A  B  C
    x  0  1                D  1  D̄
    0  x  1                1  D  D̄
    1  1  0                D  D  D̄

Here we see that if A = D and we want C to equal D̄, then we require B = 1 (D-drive). If A = 0 then C = 1 (implication). In general, D-drive and line justification imply choices; e.g., if we want C = 1 then we require A = 0 or B = 0 (line justification).

For large primitive elements such as shift registers and counters we do not develop a complete set of cubes, since they would be too numerous to store and manipulate. Rather, we use the concept of algorithms and compute our results on-line. We will illustrate these concepts for a primitive counter element. Consider an n-bit counter which, under normal operation, performs four functional operations, namely:

(U) Count up — the contents of the register are incremented by one.
(D) Count down — the contents of the register are decremented by one.
(P) Parallel load — the contents of the register are set to the values of the data inputs A1, ..., An.
(H) Hold — the contents of the register are unchanged.
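The three cube operations for the NAND gate can be mechanized as pattern matching over the five-valued alphabet {0, 1, x, D, D̄}. The sketch below is our own illustrative encoding (with "~D" standing for D̄), not the paper's implementation:

```python
# Five-valued NAND evaluation and line justification, matching the
# primitive cubes and d-cubes tabulated above.
def nand(a, b):
    """Forward implication / D-drive for a 2-input NAND."""
    if a == "0" or b == "0":
        return "1"                       # a controlling 0 forces C = 1
    if "x" in (a, b):
        return "x"                       # not enough information yet
    inv = {"1": "0", "D": "~D", "~D": "D"}
    if a == "1":
        return inv[b]                    # C = complement of the other input
    if b == "1":
        return inv[a]
    return inv[a] if a == b else "1"     # D.D -> ~D ;  D.~D = 0 -> C = 1

def justify(c):
    """Line justification: all binary input choices realizing output c."""
    return [(a, b) for a in ("0", "1") for b in ("0", "1")
            if nand(a, b) == c]
```

Note how `justify("1")` returns three choices while `justify("0")` returns exactly one, which is precisely why justification introduces backtracking choices and D-drive does not.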
The input algorithm mapping table for the UP/DOWN counter is shown in Figure 6. The functional behavior of this device is determined by the values of four control signals (P̄, U/D, G, L), as specified by Table 2.

    P̄   U/D  G   L    Algorithm
    0    x   x   x    (P) Parallel Load
    1    x   1   x    (H) Hold
    1    x   x   0    (H) Hold
    1    1   0   1    (U) Up
    1    0   0   1    (D) Down

Table 2. Input codes for normal functional operations.

Allowing the inputs on the control signals to have the values 0, 1, x, there are 3⁴ = 81 possible input conditions. Implication can be performed using a tabular approach that utilizes an input mapping table which specifies the functional behavior of the device for any values of the control inputs. The table maps each of these 81 input conditions into one of 15 possible algorithms, which correspond to the unions of the four basic operations previously listed — e.g. Load (P), Hold (H), Up (U), Down (D), Up or Down (UD), Up or Hold (UH), Down or Hold (DH), Hold or Load (HP), Up, Down or Hold (UDH), Up, Down, Hold or Load (UDHP), and so on.

(Figure 6. Input algorithm mapping table for the Up/Down Counter: each row is a cube over (P̄, U/D, G, L) mapped to one of the 15 algorithm sets, together with the number of input conditions uniquely covered.)

In the following development, y_i is the current state of the ith flip-flop in the register, 1 ≤ i ≤ n, and Y_i is the next state of this flip-flop. Some of the 15 algorithms for the counter are as follows:

Counter Algorithms

U (COUNT UP):
i) all bits to the left of the least significant 0 are unchanged;
ii) the least significant 0 becomes 1 if it is to the right of the least significant x;
iii) all bits (if any) to the right of and including the least significant x become x;
iv) a rightmost string of 1's (if any) becomes 0's.

H (HOLD):   Y_i = y_i.
P (LOAD):   Y_i = A_i.

UDH (UP or DOWN or HOLD): all bits to the left of both the least significant 0 and the least significant 1 are unchanged; all other bits become x.

These algorithms are independent of the specific implementation of the UP/DOWN counter. The table mapping control inputs into algorithms will, in general, be implementation dependent.

The table of Figure 6 and the counter algorithms can be used to perform implication. Let B = {H, U, D, P} be the set of four primitive functional algorithms. From the values of the control inputs (P̄, U/D, G, L) and the table of Figure 6, a set of possible algorithms B₁ ⊆ B is determined. Similarly, from Y, y and A, another set of possible algorithms B₂ ⊆ B is determined. Specifically,

    H ∈ B₂  iff  Σ over all i of (Y_i ⊕ y_i) ≠ 1,
    U ∈ B₂  iff  Σ over all i of (U(y)_i ⊕ Y_i) ≠ 1,
    D ∈ B₂  iff  Σ over all i of (D(y)_i ⊕ Y_i) ≠ 1,
    P ∈ B₂  iff  Σ over all i of (Y_i ⊕ A_i) ≠ 1,

where a term evaluates to 1 only when both of its operands are binary and differ, i.e. an algorithm remains possible as long as no bit position definitely conflicts. The actual algorithm must lie in the set B₁ ∩ B₂.

From this set and the table of Figure 6, additional values of the control inputs may be implied using cubical intersection. From the counter algorithms, additional values of the outputs Y may then be implied; similarly, additional implied values of y and A can be determined using the concept of inverse algorithms. The inverse relationships are as follows: H⁻¹ = H, U⁻¹ = D, D⁻¹ = U; for P the previous state y is not determined, but the data inputs are: A = Y.

Example: Consider a 6-bit counter and let (P̄, U/D, G, L) = (0 x x x), Y = (x 1 0 x x x) and y = (0 0 1 x x x). From the values of Y and y, B₂ = {U}; consequently the actual algorithm is U, and from the table of Figure 6 the signal values L = 1, G = 0 and U/D = 1 are implied. Now y = U⁻¹(Y) = D(Y) = (x x x x x x), and hence no new state variable is implied. Also Y = U(y) = (0 x x x x x), which implies Y6 = 0. After completion of implication, Y = (0 1 0 x x x).
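One convenient way to realize the (U) and (D) algorithms on partially specified states is ternary carry/borrow propagation from the LSB. The sketch below is our own formulation — equivalent in effect to clauses i)–iv) for the example above, but not the paper's code — with bits written MSB first as in the 6-bit example:

```python
# Ternary increment/decrement for the (U) and (D) counter algorithms.
# Each bit is '0', '1' or 'x'; an unknown carry smears x to the left.
def _t_xor(a, b):
    return "x" if "x" in (a, b) else str(int(a) ^ int(b))

def _t_and(a, b):
    if "0" in (a, b):
        return "0"                       # a known 0 dominates
    return "x" if "x" in (a, b) else "1"

def _t_not(a):
    return a if a == "x" else str(1 - int(a))

def count_up(y):                         # algorithm U: add one at the LSB
    carry, out = "1", []
    for b in reversed(y):
        out.append(_t_xor(b, carry))
        carry = _t_and(b, carry)
    return tuple(reversed(out))

def count_down(y):                       # algorithm D = U^-1: subtract one
    borrow, out = "1", []
    for b in reversed(y):
        out.append(_t_xor(b, borrow))
        borrow = _t_and(_t_not(b), borrow)
    return tuple(reversed(out))

def intersect(u, v):
    """Cubical intersection used to complete implication; None = conflict."""
    out = []
    for a, b in zip(u, v):
        if a == "x":
            out.append(b)
        elif b == "x" or a == b:
            out.append(a)
        else:
            return None
    return tuple(out)
```

Running this on the 6-bit example, `count_up(("0","0","1","x","x","x"))` yields `(0 x x x x x)`, and intersecting that with Y = (x 1 0 x x x) completes the implication to (0 1 0 x x x), as in the text.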
When signals can assume the values D and D̄, the implied signal values can be determined from the table of Figure 6 and the counter algorithms by the process of composition [43].

D-Drive for the Counter

This problem consists of propagating error signals D or D̄ from the control signals, the data inputs A and the state variables y to the outputs Y. The determination of inputs which propagate a D to an output of the counter can be specified by a "canonical" set of propagation D-cubes, such as those in Figure 7, which specify the propagation of a single D signal to an output of the counter.

(Figure 7. "Single-D" propagation D-cubes for the Up/Down Counter: each cube gives values of the control inputs, data inputs A, state y and outputs Y, together with the composite good/faulty algorithm realized — e.g. H/P, U/P, H/U, H/D, U/D, P/P, H/H, U/U, D/D.)

Line Justification for the Counter

Both single- and multiple-vector solutions exist to line justification problems. We will only illustrate single-vector solutions. To justify a set of values on the outputs Y, define additional control inputs and values of y and A using the table of Figure 6 and the counter algorithms, as specified in the following general procedure.

Procedure (Single-Vector Line Justification for Counter):
(1) Generate B₁, the set of possible algorithms defined by the control input values.
(2) Let B₂ be the set of possible algorithms defined by the values of y, A and Y. Compute B₁ ∩ B₂. (If none exists, justification is impossible.)
(3) Select a primitive algorithm α in B₁ ∩ B₂.
(4) Specify the necessary values of the elements in (P̄, U/D, G, L) so that algorithm α is realized.
(5) Specify y = α⁻¹(Y), unless α = P, in which case A = Y.

Example: Consider a 4-bit counter and assume Y = (0 1 0 1), y = (x x x 0), A = (x x 0 1) and (P̄, U/D, G, L) = (x x x x). Then B₁ = {U, D, H, P} and B₂ = {U, P}, so B₁ ∩ B₂ = {U, P}.

Justification No. 1: Select α = U. Then U/D = 1, G = 0 and L = 1, and y = U⁻¹(Y) = D(0 1 0 1) = (0 1 0 0), which, from the Counter Algorithm for D, is consistent with y = (x x x 0). Normal backtrack can be used to generate other possible solutions.

SUMMARY AND CONCLUSIONS

In this paper we have first reviewed the current state of the art in ATPG systems. We have then discussed several aspects of an advanced ATPG system which we are designing, namely the use of preprocessing and high-level functional models. We have not presented the actual test generation algorithm. This algorithm ties together our functional models and the results of our rate and cost preprocessing so that tests for faults can be generated more efficiently. These tests will then be processed using a concurrent fault simulator. Aspects of the initial design of this simulator were reported in [44].

We believe that the growing complexity of modern circuitry makes test generation an extremely arduous task. To ease the difficulty of this task one must employ some means of design for testability as well as have available a powerful ATPG system. In the future we believe that test generation must proceed at still higher levels
of description, such as the register transfer level and the instruction set level. The system specification must also be used; hence one will actually be testing for overall functional performance. We must learn how to efficiently handle bit-sliced microprocessors and to concatenate tests in order to handle large arrays of chips. There is also a potential application for generating tests for large regular arrays using recursive procedures. If the problems of testing are not adequately solved, the successful utilization of LSI circuits may be seriously impeded.

ACKNOWLEDGEMENT

I would like to acknowledge the contribution of Professor A. D. Friedman, who worked with me on much of the model development for the counter which is reported in this paper.

REFERENCES

[1] J. P. Roth, "Diagnosis of Automata Failures: A Calculus and a Method," IBM Journal of Research and Development, (July 1966), pp. 278-291.
[2] F. F. Sellers, M. Y. Hsiao and L. W. Bearnson, "Analyzing Errors with the Boolean Difference," IEEE Trans. on Computers, Vol. C-17, (July 1968), pp. 676-683.
[3] D. B. Armstrong, "On Finding a Nearly Minimal Set of Fault Detection Tests for Combinational Logic Nets," IEEE Trans. on Electronic Computers, Vol. EC-15, (January 1966), pp. 66-73.
[4] J. P. Roth, W. G. Bouricius and P. R. Schneider, "Programmed Algorithms to Compute Tests to Detect and Distinguish Between Failures in Logic Circuits," IEEE Trans. on Electronic Computers, Vol. EC-16, (October 1967), pp. 567-580.
[5] J. J. Thomas, "Automated Diagnostic Test Programs for Digital Networks," Computer Design, (August 1971), pp. 63-67.
[6] S. S. Yau and Y. S. Tang, "An Efficient Algorithm for Generating Complete Test Sets for Combinational Logic Circuits," IEEE Trans. on Computers, Vol. C-20, (November 1971), pp. 1245-1252.
[7] D. C. Bossen and S. J. Hong, "Cause-Effect Analysis for Multiple Fault Detection in Combinational Circuits," IEEE Trans. on Computers, Vol. C-20, (November 1971), pp. 1252-1258.
[8] H. Chang and S. Su, "Identification of Multiple Stuck-Type Faults in Combinational Networks," IEEE Trans. on Computers, Vol. C-24, (July 1975), pp. 742-746.
[9] M. A. Breuer, "A Random and an Algorithmic Technique for Fault Detection Test Generation for Sequential Circuits," IEEE Trans. on Computers, Vol. C-20, (November 1971), pp. 1364-1370.
[10] F. J. Hill and B. Huey, "SCIRTSS: A Search System for Sequential Circuit Test Sequences," IEEE Trans. on Computers, Vol. C-26, (May 1977), pp. 490-502.
[11] G. R. Putzolu and J. P. Roth, "A Heuristic Algorithm for the Testing of Asynchronous Circuits," IEEE Trans. on Computers, Vol. C-20, (June 1971), pp. 639-646.
[12] M. A. Breuer, "The Effects of Races, Delays, and Delay Faults on Test Generation," IEEE Trans. on Computers, Vol. C-23, (October 1974), pp. 1078-1092.
[13] "Delay Testing LSI Logic," Proc. 1978 Int'l. Symp. on Fault-Tolerant Computing, Toulouse, France, (June 1978), pp. 159-164.
[14] V. D. Agrawal and P. Agrawal, "Probabilistic Analysis of Random Test Generation Method for Irredundant Combinational Logic Networks," IEEE Trans. on Computers, Vol. C-24, (July 1975), pp. 691-695.
[15] K. P. Parker and E. J. McCluskey, "Analysis of Logic Circuits with Faults using Input Signal Probabilities," IEEE Trans. on Computers, Vol. C-24, (May 1975), pp. 573-578.
[16] S. C. Seth, "Analysis of Detectability of Faults by Random Patterns in a Special Class of NAND Networks," Comput. and Elect. Engng., Vol. 1, (1973), pp. 171-186.
[17] J. Losq, "Efficiency of Compact Testing for Sequential Circuits," Proc. 1976 Int'l. Symp. on Fault-Tolerant Computing, Pittsburgh, pp. 168-174.
[18] J. Losq, "Referenceless Random Testing," Proc. 1978 Int'l. Symp. on Fault-Tolerant Computing, Toulouse, France, pp. 108-113.
[19] J. P. Hayes, "Transition Count Testing of Combinational Logic Circuits," IEEE Trans. on Computers, Vol. C-25, (June 1976), pp. 613-620.
[20] J. P. Hayes, "Generation of Optimal Transition Count Tests," IEEE Trans. on Computers, Vol. C-27, (February 1978), pp. 36-41.
[21] J. P. Hayes, "Check Sum Methods for Test Data Compression," J. Design Automation and Fault-Tolerant Computing, Vol. 1, (January 1976), pp. 3-17.
[22] A. Hlawiczka, "Data Compression Techniques in Logic Testing: An Extension of Transition Counts," J. Design Automation and Fault-Tolerant Computing, Vol. 1, (May 1977), pp. 99-114.
[23] J. Losq, "Compact Testing: Testing with Compressed Data," Proc. 1976 Int'l. Symp. on Fault-Tolerant Computing, Pittsburgh, pp. 93-98.
[24] "A Designer's Guide to Signature Analysis," Application Note 222, Hewlett-Packard Company, Palo Alto, California.
[25] K. C. Y. Mei, "Bridging and Stuck-At Faults," IEEE Trans. on Computers, Vol. C-23, (July 1974), pp. 720-727.
[26] H. Fujiwara and K. Kinoshita, "Testing Logic Circuits with Compressed Data," Proc. 1978 Int'l. Symp. on Fault-Tolerant Computing, Toulouse, France, pp. 108-113.
[27] R. C. Ogus, "On the Probability of a Correct Output from a Combinational Circuit," IEEE Trans. on Computers, Vol. C-24, (May 1975), pp. 534-544.
[28] "Faultrack: Universal Fault Isolation Procedure for Digital Logic," Fluke-Trendar Div., Mountain View, California, (1974).
[29] S. Seshu and D. N. Freeman, "The Diagnosis of Asynchronous Sequential Switching Systems," IRE Trans. on Electronic Computers, Vol. EC-11, (August 1962), pp. 459-465.
[30] R. A. Rutman, "Fault Detection Test Generation for Sequential Logic by Heuristic Tree Search," IEEE Computer Repository Paper No. R-72-187, (1972).
[31] M. A. Breuer and R. L. Harrison, "Procedures for Eliminating Static and Dynamic Hazards in Test Generation," IEEE Trans. on Computers, Vol. C-23, (October 1974), pp. 1069-1078.
[32] H. Fujiwara and K. Kinoshita, "Comment on 'Procedures for Eliminating Static and Dynamic Hazards in Test Generation'," submitted to IEEE Trans. on Computers.
[33] D. B. Armstrong, "A Deductive Method for Simulating Faults in Logic Circuits," IEEE Trans. on Computers, Vol. C-21, (May 1972), pp. 464-471.
[34] H. Y. Chang, S. G. Chappell, C. H. Elmendorf and L. D. Schmidt, "Comparison of Parallel and Deductive Fault Simulation Methods," IEEE Trans. on Computers, Vol. C-23, (November 1974), pp. 1132-1139.
[35] E. Ulrich and T. Baker, "Concurrent Simulation of Nearly Identical Digital Networks," Computer, Vol. 7, (April 1974), pp. 39-44.
[36] J. P. Hayes, "Detection of Pattern-Sensitive Faults in Random-Access Memories," IEEE Trans. on Computers, Vol. C-24, (February 1975), pp. 150-157.
[37] J. Knaizuk, Jr. and C. R. P. Hartmann, "An Optimal Algorithm for Testing Stuck-At Faults in Random Access Memories," IEEE Trans. on Computers, Vol. C-26, (November 1977), pp. 1141-1144.
[38] V. P. Srini, "Fault Location in a Semiconductor Random-Access Memory Unit," IEEE Trans. on Computers, Vol. C-27, (April 1978), pp. 349-358.
[39] "Test Problems and Solutions for 4K RAMs," Bulletin 122, Semiconductor Test, (1976).
[40] See the references in [43].
[41] J. Savir, "Optimal Random Testing of Single Intermittent Failures in Combinational Circuits," Proc. 1977 Int'l. Symp. on Fault-Tolerant Computing, Los Angeles, California, pp. 180-188.
[42] M. A. Breuer and A. D. Friedman, "Functional Level Modeling in Test Generation," J. Design Automation and Fault-Tolerant Computing.
[43] M. A. Breuer and A. D. Friedman, Diagnosis and Reliable Design of Digital Systems, Computer Science Press, (1976).
[44] M. Abramovici, M. A. Breuer and K. Kumar, "Concurrent Fault Simulation and Functional Level Modeling," Proc. 14th Design Automation Conference, (June 1977), pp. 128-137.
[45] M. A. Breuer, "Testing for Intermittent Faults in Digital Circuits," IEEE Trans. on Computers, Vol. C-22, (March 1973), pp. 340-351.
[46] S. Kamal and C. V. Page, "Intermittent Faults: A Model and a Detection Procedure," IEEE Trans. on Computers, Vol. C-23, (July 1974), pp. 713-719.
[47] I. Koren and Z. Kohavi, "Diagnosis of Intermittent Faults in Combinational Networks," IEEE Trans. on Computers, Vol. C-26, (June 1977), pp. 1154-1158.

COMPUTER-AIDED DESIGN of digital electronic circuits and systems, G. Musgrave, editor
North-Holland Publishing Company
© ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD — ARE REQUIREMENTS CONVERGING?

H. De Man
Katholieke Universiteit Leuven, Laboratorium ESAT
Heverlee, Belgium

INTRODUCTION

No discipline in today's technology is subjected to such an increase in complexity as that of digital electronics. This is mainly the result of achieving over four orders of magnitude increase in functional density per integrated circuit during the last two decades [1]. As it stands today, a digital system can be considered as an interconnection on a printed circuit board (PCB) of a number of functional integrated circuits (IC's) at MSI or LSI level. This development is the basis of the low cost of computing power, which in its turn is necessary to cope with the manipulation of the vast amount of data involved in the design of such systems.

The purpose of this contribution is to discuss the commonalities and differences between computer-aided design (CAD) tools for PCB and IC level design. This is believed to be of interest for understanding future developments of software tools in both fields and for encouraging cross-fertilization wherever possible. The problem will be studied by looking at the growth in complexity of IC's and the corresponding growth in software tools for simulation, packaging, layout and testing. It is thereby convenient to look first at the so-called MSI period before 1975, followed by a study of actual trends from LSI towards VLSI.

§1) THE MSI PERIOD BEFORE 1975

This period is characterized by little or no commonality between PCB (system) design and IC design. During this time span IC's are limited in complexity to gate and register level (Fig. 1). To ease the design of complex systems, MSI IC's are characterized by:

a) a high degree of standardization in electrical and I/O buffering characteristics as well as logic function;
b) the lowest possible cost/function for the highest possible performance in terms of delay and power dissipation.

These functions are assembled by the system designer as sets of PCB's containing 10-100 MSI's, which impose a maximum capacity of ca. 1-10K gate equivalents, to build digital control systems and mini or maxi mainframes, where high speed at lowest cost is of primary importance. Clearly this leads to strongly different design objectives. The differences are summarized in Table 1.

    Characteristic    IC level              PCB level
    Design problem    Circuit               Logic & system; PCB layout; testing
    Abstraction       Transistor            Gate and higher
    Signals           V(t), I(t)            1, 0, X, Z
    Objective         MIN(P × t_d × A_c)    MIN(package count)

Table 1. Difference in design problems for MSI level design. P, t_d and A_c are respectively gate power, gate delay and chip size.
§1-1) Design verification

Due to the low complexity, layout and testing of MSI functions did not present major problems. The main problem at IC level is one of circuit optimization at the transistor level, which leads to the objective of smallest chip size (lowest cost), whereas for PCB-MSI design the problem is at the logic design level as well as at the PCB layout and testing level. In as far as design verification is concerned, this has led to the disciplines of circuit simulation (CKTSIM) for IC design and of logic simulation (LS) in the PCB design world. Their properties are shown in Table 2 and illustrated in Fig. 1.

    Characteristic          Ckt. Sim. (IC)            Logic Sim. (PCB)
    Level                   Transistor                Gate-FF-(functional)
    Model                   Algebraic                 Logic & delay
    Signal                  V(t), I(t)                1, 0, X, Z
    Technology-layout link  Excellent                 Very weak
    Algorithm               Matrix inversion,         Event scheduling,
                            Newton-Raphson,           exploitation of
                            implicit integration,     inactivity,
                            double precision          logic operations
    Max. complexity         30-50 gates               20,000 gates

Table 2. Properties of circuit simulation and logic simulation.

From this table and Fig. 1 one can notice:

a) At the time of their development (1968-1970) both tools could cope with the full complexity of the problems at hand in both areas.

b) As wished by the IC design objective, circuit simulation allows for very accurate (~1%) simulation of circuit behavior as a function of technology and layout, but the complexity of algebraic device models, numerical algorithms and double-precision (64-bit) arithmetic limits it to some tens of gates per simulation. This limitation is fundamental and can only be lifted at the expense of a loss in accuracy, and for digital circuits only (see §2 on timing simulation).

c) Logic simulation is ca. 10³ times more efficient, by exploiting the high average inactivity (latency) of digital networks and by representing models as simple logic operators on the very restricted set of signal states (1, 0, X, Z). Notice, however, that in LS the link of performance to technology and layout is very weak.

The success of LS at PCB level is the result of the high degree of standardization of MSI functions offered by the IC manufacturers, which allows for the creation of library-stored "macro-descriptions" of most MSI IC's to be used directly by the designer, who does not have to worry too much about the modeling problem. LS is a typical PCB-MSI tool; the problem of applying it to today's LSI IC's, and at PCB-LSI level, will be discussed in §2. No such severe modeling problem existed at IC level, where layout is done manually using interactive graphics as a drawing aid.

§1-2) Testing design

Table 2 and Fig. 1 clearly show that PCB designers have been confronted from the start with the testing problem of 1-10K gate circuits. This has resulted in the discipline of Automated Test Pattern Generation (ATPG), initiated after Roth's introduction in 1966 of the D-calculus [4], or path sensitization, which is still recognized as the best systematic technique available.
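The inactivity ("latency") exploitation credited to logic simulation in item c) above can be made concrete with a minimal event-driven simulator. This is our own toy sketch under assumed conventions (two-valued signals, unit gate delay, a dictionary netlist), not any production LS system:

```python
# Minimal event-driven logic simulator: only gates whose input lines
# actually change value are re-evaluated; all latent activity costs nothing.
import heapq
import itertools

def simulate(gates, fanout, values, stimuli, t_end=1000):
    """gates: name -> (fn, input_lines, output_line);
    fanout: line -> list of gate names; stimuli: [(time, line, value)].
    Returns the final line values."""
    seq = itertools.count()               # tie-break: later events win last
    queue = [(t, next(seq), line, v) for t, line, v in stimuli]
    heapq.heapify(queue)
    while queue:
        t, _, line, v = heapq.heappop(queue)
        if t > t_end or values.get(line) == v:
            continue                      # latent event: no change, no work
        values[line] = v
        for g in fanout.get(line, ()):    # schedule only the affected gates
            fn, ins, out = gates[g]
            new = fn(*(values[i] for i in ins))
            heapq.heappush(queue, (t + 1, next(seq), out, new))  # unit delay
    return values
```

For a single NAND gate with both inputs rising at t = 0, only that one gate is evaluated, and its output settles to 0 one delay later — the rest of any larger network would never be touched.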
§ 2 A FTER 1975 : FROM LSI TOWARDS VLSI Q) Referring back to Fig.e.b. the discipline of design and partitioning for testability(DPT)■ These techniques mainly impose : a. for IC simulation 2.5 < y < 2. however reveals the fact that the average system complexity is increasing at a much slower rate than IC complexity i. Therefore the PCB layout problem will probably not become more difficult in the future but the burden will be on IClayout as will be discussed in § 2.e.l.NY (1. Subnetwork partitioning into Ρ subnetworks using the SRL concept which reduces cost to : Ρ Ρ CPUv2 N . Therefore computer manufac turers rapidly went into the dissipline of imposing design rules to make the sys tem testable i. Fig. CKT SIM as such is totally inefficient to cope with full IC complexity. Even today. Fig. 13) PCB layout A large number of PCB layout programs have been developed. The provision to add additional (10.. The probiert Ì A even more complicated since ATPG needs to be verified by TP verifi cation d L ° J ta using logic simulation due to crude modeling and simple fault models used in ATPG algorithms.5) (2) 83 with N the number of gate equivalents and K(FF) an exponential function of the number ofJlipF_lops (or state variables) of the sequential system. Instead it seems that LS is appropriate. shows clearly the convergence between systems and IC complexity when en tering the eighties. From Fig. y « ( ¿ N . & c ) . as discussed in § 1. rather than making the system complexer it is made cheaper per function by IC technology. Design and partitioning for testability.l. shows again that ATPG at the time of its concep tion (1966) was barely capable of coping with the PCB system complexity but not at all capable as such to cope with todays complexity. This imposes the sacrifice of some of the potential complexity increase to design structuring and architecture for testability(e. 
20%) logic to transform sequential into combinational logic for testing purposes by allowing all flipflops to be con nected as a scanable shiftregister (SRL) for state verification L1QJ c. let us analyse some converging and diverging requirements for the design of a PCB system and IC's. Furthermore the correspondence between intended logic and layout wiring is sometimes far from straightforward(e. Practical application of such a data base is linked to its continuous updating which is a tremendous task requiring close collaboration of research institutes and semiconductor industry. I^L) and layout and logic performance are strongly interrelated.low temperature processing. processors. merged structures..LS. b. SS software. Exactly those three fields are in full evolution since the growth in Fig. (cfr EEC study proposal on data bank)..At IC level the full spectrum from system down to circuit. 10& geometric data ! ) . in IC design the data-explosion is present at each design problem at hand.l).analog MOS..l.. there is a strong need for system simulation (SS) which allows for design verification of synchronous systems described as the interaction of subsystems(RAM. low defect oxide growth.PLA. This is clearly beyond the capability CKTSIM as shown in Fig.1 is only possible by technology improvements (smaller linewidths.84 H.ROM. dynamic RAM design). This leads to : . .g. often electrical effects which cannot be modeled in LS (e. sensing amplifiers e t c .TTL logic levels. . Diverging requirements : the technology link The biggest difference between PCB and IC design is that.g. charge transfer. the lowest level of design abstraction is extremely different. since progress is only possible by circuit technology cleverness. All these effects cause strongly divergent requirements in different fields discussed below : 1) Data base For PCB systems the design of data base at functional(logic)level is imperative in conjunction to ATPG. 
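The leverage of rule c can be quantified directly from equations (2) and (3). The sketch below is a numeric illustration under assumed values: K(FF) is taken as 1 (scan design per rule b removes the sequential term) and y = 2 is one point inside the quoted range 1.5 < y < 2.5:

```python
# ATPG effort per equations (2) and (3): N**y for the unconstrained
# network versus the sum of N_i**y over scan-partitioned subnetworks.
def atpg_cost_unpartitioned(n_gates, y=2.0):
    return n_gates ** y

def atpg_cost_partitioned(n_gates, p, y=2.0):
    """Idealized partition into p equal subnetworks of n_gates/p each."""
    return p * (n_gates / p) ** y

ratio = atpg_cost_unpartitioned(10_000) / atpg_cost_partitioned(10_000, 10)
```

For y = 2, partitioning a 10K-gate network into 10 scan-bounded blocks cuts the ATPG effort by a factor of 10; the superlinear exponent is exactly why partitioning pays.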
Practical use of such a data base lies in its capability to adapt to a fast technology evolution! Whereas in PCB design the data explosion lies in the increasing number of LSI functions, in IC design the data explosion is present at each design problem at hand: the amount of design data explodes literally downward into device and technology data (10^4 gates/chip ≈ 10^6 geometric data!). Here an important contribution of sponsored joint efforts of universities and industry seems necessary. To the author's knowledge a lot of research is going into this field, but it is not yet widely used, for lack of standardization of methods and of education of the users.

2) Simulation (design verification)

The need for SS and LS at both PCB and IC level has been recognized above. Beyond 1980 for IC design, and today already for PCB design, there is a strong need for system simulation (SS), which allows design verification of synchronous systems described as the interaction of subsystems (RAM, ROM, PLA, registers) described at functional level. The VLSI chip design starts by conceptualizing the (testable!) interaction between synchronous subsystems (SS). These can be verified by LS by expanding each subsystem into gate-FF-register-memory level. This level in turn can be verified for meeting the timing requirements within the system's clocking by LS and CKTSIM.

However, often full circuit simulation up to the 10K gate (bit) level would be useful. This is clearly beyond the capability of CKTSIM, as shown in Fig. 1. To fill this gap, a new CAD discipline called TIMING SIMULATION (TS) [13, 14] has recently appeared, which can cope with the dominant effects in logic MOS and I²L circuits up to ca. 5K gate level (see Fig. 1). Timing simulation is a relaxed form of circuit simulation for digital circuits, based on the three following facts :

- model simplification : device models are stored as tables or piecewise-linear elements, and no nonlinear iteration is used;
- algorithm simplification : by equation decoupling for small timesteps, matrix inversion is avoided and network inactivity is exploited as in logic simulation;
- macromodels of logic networks are used (from gates to PLA's) [16, 23].

As a result TS is between 10² and 10³ times more efficient than CKTSIM, yet it still predicts circuit behavior as a function of layout and technology; this direct link to layout and technology is of primary importance. Timing simulation is thus a typical IC design tool, resulting from the strong link of IC design to technology. A tendency also exists to mix the design levels in so-called "hybrid" simulation, in which LS, TS and CKTSIM are possible under one memory and program manager.

Fig. 1 and Table 3 summarize the range of applicability, the response and modeling level, as well as the degree of link to layout and technology of the different simulation disciplines known today. Clearly visible is the need to use the full spectrum of design aids for IC work, in contrast to PCB design. Most important, however, is that no single discipline can cover the full spectrum, since the higher the level of simulation, the less detailed the information obtained.

Table 3 : summary of properties of existing design verification aids.

    Sim. type      Response          Modeling            Accuracy    Techn. & layout link
    SYSTEM SIM.    vectors           functional             -                0 %
    LOGIC SIM.     1, 0, X + delay   gate - FF              -               20 %
    TIMING SIM.    V(t), I(t)        macromodel, table    ca. 20 %          80 %
    CKT SIM.       V(t), I(t)        algebraic device     ca.  1 %         100 %

From the above, and from the extremely high cost of redesign due to mistakes, it is absolutely necessary to unify all CAD disciplines under one common description language (CDL) and data base, such that no errors are introduced when going from one level of abstraction to the next. This in itself imposes a top-down design approach, as in structured software programming.
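The three simplifications behind timing simulation can be made concrete in a few lines. The following is a minimal illustration of the idea only — table-lookup device currents and a decoupled, explicit per-node update — and does not reproduce MOTIS or MOTIS-C; all element values are invented.

```python
import bisect

class TableModel:
    """Device current stored as a piecewise-linear lookup table
    (model simplification: no nonlinear iteration needed)."""
    def __init__(self, v_pts, i_pts):
        self.v_pts, self.i_pts = v_pts, i_pts

    def current(self, v):
        if v <= self.v_pts[0]:
            return self.i_pts[0]
        if v >= self.v_pts[-1]:
            return self.i_pts[-1]
        k = bisect.bisect_right(self.v_pts, v)
        v0, v1 = self.v_pts[k - 1], self.v_pts[k]
        i0, i1 = self.i_pts[k - 1], self.i_pts[k]
        return i0 + (i1 - i0) * (v - v0) / (v1 - v0)

# One node capacitance charged by a table-modeled pull-up. The explicit,
# per-node update (algorithm simplification) needs no matrix inversion:
# each node only reads the voltages of the previous timestep.
pullup = TableModel([0.0, 5.0], [1e-3, 0.0])   # current tapers off at VDD
v, c, dt = 0.0, 1e-12, 1e-10                   # volts, farads, seconds
for _ in range(100):
    v += dt * pullup.current(v) / c            # forward Euler, decoupled
print(round(v, 2))                             # -> 4.34 (approaching VDD = 5 V)
```

In a real timing simulator the same update is applied to every active node per timestep, and nodes whose voltage change stays below a threshold are skipped — the "network inactivity" exploitation mentioned above.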
3) Layout

Since the complexity of interconnections is moving from the PCB board into the LSI chip, the IC layout problem becomes the most severe one, with very different requirements, as summarized in Table 4.

Table 4 : layout problems for LSI & PCB

        (V)LSI                                   PCB
    1   Area optimization important              Less important
    2   Strong performance-layout link           Weak, due to I/O standardization
    3   Transistor level                         Functional level
    4   Highly technology dependent              Standardized
    5   Many design rules                        Few design rules
    6   Function identification very difficult   Component library easy
    7   Highly error prone                       Less error prone

The area optimization requirement (point 1) still makes manual design predominant, sometimes aided by symbolic design, whereby use is made of a menu of predefined symbols which are translated into the different mask-level geometries [19]. However, when going from 1970 to 1985 we see ca. 3 orders of magnitude expansion of geometric data per chip design, i.e. on the order of 10^6 items to be verified against some 40 design rules! Manual techniques are very error prone, and therefore the typically IC discipline of DESIGN RULE CHECKING [20] has been created. Again, however :

    CPU ∝ n log n

with n the number of geometric entities. This property again indicates the need for design automation in a structured, partitioned environment.

One such technique is automated layout using a cell approach [22]. In this polycell approach a number of "standard" circuit functions are designed at transistor level (cells), using interactive graphics coupled to design-rule verification. Once these cells have been designed, their geometrical, logical and electrical characteristics (e.g. logic diagram, loading rules, macromodels) are stored in a common data base as discussed earlier. An automated layout program based on row placement and channel routing [24] then treats the placement and interconnection of the cells without regard to their actual content. At this level, techniques similar to those of PCB design are used, and all approaches can be run on a minicomputer, possibly used as an intelligent terminal. In newer approaches [25], partitioning, placement and routing of polycell subcircuits as well as manual design become possible. Since all data is contained in the data base, logic and timing simulation of the automated layout can also be performed to verify the actual chip performance; and if the cell layout is also done to testability requirements, ATPG is possible as well. In this way a fully automated design system can be built for (V)LSI, as shown in Fig. 3, which represents an "integrated automated design system" (logic, timing and circuit simulation being activities (2), (3), (4) in Fig. 3). It can be concluded that this is probably the only viable way to (V)LSI. The evolution towards this controlled design method is probably comparable to the evolution of programming from machine-level language towards higher-level structured programming.

4) Modeling

Whereas for PCB design as well as for VLSI design modeling at functional level is necessary, there is again an additional modeling requirement for VLSI at the bottom level, i.e. the technology and device level.
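The n log n cost quoted for design rule checking under 3) comes from the sort that dominates geometric rule checks. The toy check below is only an illustration of that behaviour: it verifies a minimum-spacing rule between wire segments on a single 1-D track, whereas real DRC works on 2-D mask rectangles; the function name and numbers are mine.

```python
# Toy design-rule check: minimum spacing between segments on one track.
# The sort is O(n log n) and dominates; the scan afterwards is O(n).

def spacing_violations(segments, min_space):
    """segments: list of (start, end); returns adjacent pairs whose gap
    is smaller than min_space."""
    segs = sorted(segments)                          # O(n log n)
    bad = []
    for (s0, e0), (s1, e1) in zip(segs, segs[1:]):   # O(n) sweep
        if s1 - e0 < min_space:
            bad.append(((s0, e0), (s1, e1)))
    return bad

print(spacing_violations([(0, 2), (9, 12), (3, 5)], min_space=2))
# -> [((0, 2), (3, 5))]   (gap of 1 violates the 2-unit rule)
```

Even this trivial sweep shows why, at 10^6 geometric items, checking cost stays tractable only if the layout is kept partitioned, which is the structured-environment argument made above.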
Previous device models cannot cope with a number of new effects resulting from the reduction to micron linewidths, and modeling of technology steps is necessary to understand submicron lithography and low-temperature processing [26]. See activities (10) and (11) in Fig. 3.

§ 3 CONCLUSION

From the above we can conclude that CAD for VLSI has a number of converging and diverging requirements, commonalities and differences, with respect to CAD for PCB systems.

Converging :
- both are systems design in top-down fashion (sharing SS, LS and functional modeling);
- testability has to be included in the design process so that ATPG can be used.

Diverging :
- the spectrum of VLSI design runs down to transistor, technology and device level;
- the absolute link to technology;
- the absolute requirement for an integrated simulation-layout-testing CAD system.

Fig. 4 illustrates the CAD disciplines for both PCB and IC design, showing clearly their interrelationships. Clearly, the efforts required to create tools for the two disciplines are fairly different, except at the top level.

References

[1] A. STRUGE : "A Test Methodology for Large Logic Networks".
[2] D.O. PEDERSON et al. : "A Simulation Program with LSI Emphasis", Proc. 1978 IEEE Int'l Symp. on Circuits and Systems, pp. 1-4.
[3] A. FELLER : "CAD VLSI Design Techniques and Microprocessor Application".
[4] J.P. ROTH : "Diagnosis of Automata Failures : A Calculus and a Method", IBM Journal of Research and Development, July 1966, pp. 278-291.
[5] G. MOORE : "Progress in Digital Integrated Electronics", IEEE IEDM Digest, Washington D.C., 1975.
[6] P. BOTTORFF et al. : "Test Generation for Large Logic Networks", Proc. 14th Design Automation Conference, June 1977.
[7] S.A. SZYGENDA et al. : "Digital Logic Simulation in a Time-Based, Table-Driven Environment - Part 2 : Parallel Fault Simulation", Computer, March 1975, pp. 38-49.
[8] H.Y. CHANG et al. : "Deductive Techniques for Simulating Logic Circuits", Computer, March 1975, pp. 52-53.
[9] E.G. ULRICH et al. : "Concurrent Simulation of Nearly Identical Digital Networks", Computer, April 1974, pp. 39-44.
[10] M.J.Y. WILLIAMS et al. : "Enhancing Testability of Large Scale Integrated Circuits Via Test Points and Additional Logic", IEEE Transactions on Computers, Vol. C-22, January 1973, pp. 46-60.
[11] M.A. BREUER, A.D. FRIEDMAN : "Diagnosis & Reliable Design of Digital Systems", Pitman Publ. Ltd., London, 1977.
[12] G. RABBAT et al. : "A Computer Modeling Approach for LSI Digital Structures", IEEE Trans. on Electron Devices, Vol. ED-22, August 1975.
[13] B.R. CHAWLA et al. : "MOTIS - An MOS Timing Simulator", IEEE Trans. on Circuits and Systems, Vol. CAS-22, December 1975.
[14] S.P. FAN et al. : "MOTIS-C : A New Circuit Simulator for MOS LSI Circuits", Proc. 1977 IEEE Int'l Symp. on Circuits and Systems, pp. 700-703.
[15] M. HSUEH et al. : "New Approaches to Modeling and Electrical Simulation of LSI Logic Circuits", Comptes Rendus des Journées d'Electronique, Ecole Polytechnique de Lausanne, 1977.
[16] H. DE MAN et al. : "The Use of Boolean Controlled Elements for Macromodeling of Digital Circuits", Proc. 1978 IEEE Int'l Symp. on Circuits and Systems, pp. 522-526.
[17] G.R. BOYLE : "SIMPIL : A Simulation Program for Injection Logic", see ref. [15].
[18] W. VAN CLEEMPUT : "An Hierarchical Language for the Structural Description of Digital Systems", Proc. 14th Design Automation Conference, June 1977, pp. 377-385.
[19] D. GIBSON et al. : "SLIC - Symbolic Layout of Integrated Circuits", Proc. 13th Design Automation Conference, June 1976, pp. 434-440.
[20] B. LINDSAY et al. : "Design Rule Checking and Analysis of IC Mask Design", Proc. 13th Design Automation Conference, June 1976, pp. 301-308.
[21] G. PERSKY et al. : "LTX - A System for the Directed Automatic Design of LSI Circuits", Proc. 13th Design Automation Conference, June 1976.
[22] A. FELLER : "Automatic Layout of Low-Cost Quick-Turnaround Random-Logic Custom LSI Devices", Proc. 13th Design Automation Conference, June 1976, pp. 79-85.
[23] D. ARNOUT et al. : "The Use of Threshold Functions and Boolean Controlled Network Elements for Macromodeling of LSI Circuits", IEEE Journal of Solid State Circuits, Vol. SC-13, June 1978, pp. 326-332.
[24] H. BEKE et al. : "CALMOS - Computer Aided Layout Program for MOS LSI", IEEE Journal of Solid State Circuits, Vol. SC-12, 1977.
[25] B. PREAS et al. : "Methods for Hierarchical Automatic Layout of Custom LSI Circuit Masks", Proc. 15th Design Automation Conference, June 1978, pp. 206-212.
[26] Proc. of NATO Course on Process and Device Modeling, Université Catholique de Louvain, Louvain-la-Neuve, Belgium, July 19-29, 1977.

Fig. 1 : Evolution of IC and system complexity as compared to the complexity handled by CAD tools.
Fig. 2 : Cost of ATPG as a function of complexity for different design strategies (full network; full network with SRL; full network partitioned and with SRL; abscissa in thousands of gates). Source : ref. [6].
Fig. 3 : "An integrated design system for IC's".
Fig. 4 : Overview of CAD for system and IC design and their relationships.
CURRENT TRENDS IN THE DESIGN OF DIGITAL CIRCUITS

Hans Martin Lipp
University of Karlsruhe, Germany

ABSTRACT

Technological success results in digital circuits with more functions per chip than ever before, and up to now the designer has had to struggle with very different kinds of hardware; new products seem to enlarge that problem. It seems to be impossible to cover the whole range of applications with a unique digital concept. Low-priced microprocessors may suggest a common hardware approach, but speed restrictions and high programming effort practically prevent such a general solution. Customer-designed circuits are only a realistic alternative for high-volume applications, and they require a close interaction between customer and semiconductor manufacturer. Highly regular structures like matrices are therefore preferred, or such designs as result in generally accepted random structures. With regard to the topic of this symposium, the aspect of computer-aided design for such circuits is the central topic, outlining the need for cheap but effective tools in this field. As an example, the Karlsruhe design system LOGE will be presented as an efficient instrument which can handle different hardware concepts for control applications.

I. INTRODUCTION

Digital circuits are now in widespread use, not only in computer applications but also in many different areas which were formerly dominated by mechanical, pneumatic or electromechanical devices. Reduced power consumption, higher reliability, the need for more comfortable handling and control, and additionally wanted features are forcing the introduction of digital circuits into nearly all areas of engineering. As a result, the term "digital" may be an attribute belonging to a complete unit or only to a small fraction of a whole system. But often the solutions must compete, e.g. with electromechanical units, which are very cheap, safe against power failure, and able to actuate power switches directly.

Larger companies and manufacturers with a broad production line in digital systems have a lot of in-house experience, investments, and correspondingly trained personnel. On the other hand, digital circuits are also in wide use in fields which deal with only a few parts per system. Smaller companies' restricted abilities in manpower, in investment, and in experimenting in a new field create a much more difficult situation in adapting to digital electronics than exists for larger companies. Without additional insights and tools it is nearly impossible to evaluate the given choices in an efficient and competent manner. There exists, therefore, a concrete need for powerful computer-aided design methods that can deal with the complexity of modern circuits and that guarantee a high design quality. The aim of this paper is not primarily to discuss the problems of the larger companies, but to create an approach which better reflects the needs and possibilities of smaller companies without a broad background in digital techniques. This paper does not deal with a detailed evaluation of existing digital circuits and their features.

Design of digital circuits here will refer to that step within the development process which is concerned with the construction of a specific solution from a given set of digitally operating chips and modules. A real gain in overall costs and performance can only be achieved if a good choice of hardware has been made and the digital circuits are extremely well designed; in the long run, this can only be achieved by using appropriate design tools.
The complete development process of a system can be divided into several more or less distinguishable steps, which are related to specific tasks; in many cases, e.g. in control applications, the schematic of Fig. 1 may be used. The actual positions of the steps depend heavily on the type of integrated-circuit family to be used, and the dividing lines between the customers' and the manufacturers' parts are not precisely fixed. Designations like "program" and "periphery" are not necessarily equal to those used in the context of digital computers: programs are often fixed routines performing operations for predefined, invariable tasks, and peripheral units may not include printers and card readers, but can consist of switches, solenoids, stepping motors, special readouts, etc.

Figure 1 : steps of the development process (customer: definition of task, formal description, hardware concept, logic design, software (microprograms etc.), physical design, ROM codes; manufacturer: production, delivery, test by manufacturer, test by customer; interface, periphery, user programs).

The history of digital electronics shows that the efforts to use computers and appropriate programs for design first started close to manufacture. They had been triggered by economical factors and the growing complexity of tasks like routing, drawing, wiring, and producing cross-references. Later on, modelling and simulation at the gate and register level were supported by computer programs, but the logic design itself remained a domain of man. Designers acted in this field like artists: personal creativity and design style, experience, and time constraints produced highly differentiated results. In many cases they may have been quite effective, but documentation and testing were neglected in this trend, and with the same speed as technological progress produces more complex circuits, the classical design philosophy is becoming more and more obsolete and cannot meet the needs of current developments.

Two different strategies now try to overcome these problems. The first one assumes that programming is more easily accomplished and better adapted to different tasks, thus avoiding new hardware designs and their problems. The microprocessor approach is an outstanding example of that intention. But even larger microprocessor applications need additional circuitry that must be designed in another way. The other strategy concedes the fact that not all problems can be solved without specialized hardware. Hardware description languages and computer-aided design are then the central aids. The next chapters will discuss some concepts and ideas on how to overcome those difficulties related to logic design.

Fig. 1 does not show the fact that the design process is no linear arrangement of independent steps: constraints, especially in logic and physical design, overlay a complicated net of backward-directed interconnections to distant steps. The great number of alternative choices for implementations and the complexity of present chips and modules make it impossible to decide, before starting any concrete design step, which would be the best solution for a given problem. Computer-aided design may ease this situation a lot; it is, however, necessary to generate more exact task descriptions.

II. LOGIC DESIGN AUTOMATION

Hand-crafted logic designs are normally verified by modelling and simulation. This may be interpreted as a software replacement of the well-known hardware experiment. But the grown complexity of circuits, together with the increasing number of parameters, does not allow one to perform detailed simulations with a complete set of test patterns; only some functional tests are within reach. This does not guarantee safe and reliable operation at all, and the overall result is a time-consuming and expensive simulation or hardware debugging. Because of this inefficiency, simulation must not be used as a validation tool for logic design; it should be restricted to performance evaluation during the design phase of a system's architecture (see e.g. |9|). In the past, many CAD programs failed because they used well-known but inadequate methods.

The first difference from conventional designing is the introduction of a formal task definition, which leads to a completely determined behaviour and interface description. Designers often decline to take this step, because a very early definition would not be possible in real circumstances. In my opinion this is no substantiated argument, because it is based only on experience with a design style that always starts with some hardware design steps to realize some subtask.

The main goal of CAD tools, therefore, must be to save as much unnecessary effort as possible: analytic procedures must be avoided and replaced by synthesizing ones. Highly sophisticated algorithms and synthesis procedures, together with restart facilities, are the only way to achieve this goal. Wasting of computer time is then minimized, which matters because smaller companies cannot afford the burden of unsuccessful simulation runs. The basic interrelations are described by Fig. 2.

Figure 2 : iteration scheme of synthesis (formal definition of task; choice of basic type; definition of structure; synthesis of a near-by solution; evaluation by the designer; improvement of the near-by solution by modification of structure or of parameter values; restart of former design runs; correct optimal programs).
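The contrast drawn above — validation by simulating every case versus a formal task definition that a synthesis step can check and transform mechanically — can be made concrete with a toy example. The table format and the sequence-detector task below are mine, not LOGE's; the point is only that a complete formal specification can be verified for completeness once and then turned into an implementation without per-case simulation.

```python
# Sketch: a formal task definition for a small controller, and a
# mechanical "synthesis" step in the spirit of section II.
# (state, input) -> (next_state, output); example is hypothetical.
SPEC = {
    ('IDLE',  0): ('IDLE',  0),
    ('IDLE',  1): ('SEEN1', 0),
    ('SEEN1', 0): ('IDLE',  0),
    ('SEEN1', 1): ('DONE',  1),   # output 1 after two consecutive 1s
    ('DONE',  0): ('IDLE',  0),
    ('DONE',  1): ('DONE',  1),
}

def synthesize(spec):
    """Check the table for completeness, then return a step function.
    The result is valid by construction of this check, not by
    simulating every input sequence."""
    states = {s for s, _ in spec}
    assert all((s, x) in spec for s in states for x in (0, 1)), "spec incomplete"
    return lambda state, x: spec[(state, x)]

step = synthesize(SPEC)
state, outs = 'IDLE', []
for bit in [1, 1, 0, 1, 1]:
    state, out = step(state, bit)
    outs.append(out)
print(outs)   # -> [0, 1, 0, 0, 1]
```

An incomplete table would be rejected before any design step runs — the kind of early, cheap error detection that the formal task definition is meant to provide.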
Instead of generating and implementing hardware details, the designer has to change his interests somewhat: he should concentrate on evaluating alternative solutions with regard to influences that cannot be handled by computers. Introducing a formal task specification primarily shifts activities from a late step in the design to a very early one, without additional costs. The most essential feature of the proposed concept lies in the fact that all results of the synthesis procedure are validated by proof of the algorithm used, and not by simulation of every single case.

All design steps concentrate on solving a small fraction of the whole design problem, and the applied optimization procedures are only valid within the limited context of the subtask of that step. It seems to be impossible to perform an overall optimization; therefore, designers assume that piecewise optimization may end in a solution near to the ultimate goal, which itself is unknown. Validation of this heuristic attack is still open, and some implementation studies indicate that optimizing one design step may significantly increase the cost of other design steps.

In addition, the design description must be well readable, self-contained, and complete. Up to now, documentation is mostly done afterwards; instead, the data and schematics generated by the CAD system should be used as the main source and reference for test generation and maintenance support.

The challenge of this approach may be seen in four different areas:

1) The "language" used to generate the formal description of a given task must be simple, acceptable to most designers, based on more or less familiar elements, and easy to implement. Sophisticated design concepts (e.g. |10|) require, at least at the moment, well-educated users, and are therefore not generally applicable.

2) Synthesis algorithms and programs should be able to handle at least contemporary parameter values and module sizes. For most design tasks an upper limit for the essential parameters is not known or not realistic; thus storage overflow or timeout may occur during a design calculation. Poorly designed CAD systems react to these events only by stopping without a result. More efficient systems must control their internal calculations with regard to those effects: if storage overflow or timeout is likely to occur in the midst of a step, the program itself has to switch to a different goal, producing not the optimal but a suboptimal solution within the given limits. The designer then has to decide whether he is satisfied with the result or not; if a more elaborate result is necessary, he must be able to restart the calculations with the former result as a starting point.

3) Logic design problems are normally related to very bad growth functions: enlarging the number of parameters or their values implies a steep increase in storage and computing time. Moreover, evaluation of the cost and also of the quality of a design can only be done by judging the final result.

4) In some cases CAD tools are available for modern digital circuits, but often they are restricted to a single circuit family. The more expensive such CAD systems are, the less is the chance for the customer to adapt to upcoming better devices; switching to other products is nearly impossible. Therefore, CAD systems should be based on product-independent algorithms. Personalization to specific products will then affect only small portions of the design programs (see Fig. 3), and modification to new devices must be possible by changing only minor parts of the procedures.

Design tools of the described type may help the designer to overcome some of the still unsolved problems.
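The behaviour demanded in point 2 — stop before the resource limit is exceeded, hand back a usable suboptimal result, and allow a restart from it — can be sketched generically. The objective function below is a stand-in, not a real logic-minimization cost, and the greedy improvement loop is only one possible control scheme.

```python
# Budget-and-restart sketch of point 2: the procedure returns its
# best-so-far result when the evaluation budget is spent, and a later
# run can restart from that result instead of from scratch.

def improve(solution, cost, budget):
    """Greedy single-bit improvement, bounded by an evaluation budget."""
    best, spent = list(solution), 0
    while spent < budget:
        improved = False
        for i in range(len(best)):
            trial = best.copy()
            trial[i] ^= 1                    # flip one decision bit
            spent += 1
            if cost(trial) < cost(best):
                best, improved = trial, True
            if spent >= budget:
                return best                  # budget exhausted: suboptimal
        if not improved:
            break                            # true local optimum reached
    return best

cost = lambda bits: sum(bits)                # toy objective: minimize ones
first = improve([1, 1, 1, 1], cost, budget=2)    # too small a budget
final = improve(first, cost, budget=100)         # restart from former result
print(first, final)   # -> [0, 0, 1, 1] [0, 0, 0, 0]
```

The first call stops with a partially improved result; the second call continues from it, which is exactly the "restart with the former result as a starting point" facility argued for above.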
III. MODERN DESIGN PHILOSOPHY

Nearly independently of a specific technology, the current trend yields new circuits with more functions per chip than ever before. Regular arrangements of basic circuits and software-oriented solutions form the main stream of available or announced modules. Solutions are embedded into more general circuits, which must show a higher complexity than necessary for the special application at hand. That always means wasting a certain amount of the possibilities on the chip; it may be expressed in the percentage of unused parts or instructions, up to about fifty percent. Only in a small number of high-volume applications will better-adapted, complete designs be looked for. In all cases (customer designs are not discussed here) the adaptation of the neutral devices to a specific task will be done by the user by personalization techniques like fuse-blowing or arranging statements in a certain sequence. The level on which the design can be personalized depends strongly on the desired speed, the production volume, and the available design support.

Design flexibility may be created in two different ways, as has already been mentioned. The first way uses the generally accepted random structure of digital computers: flexibility is constructed within the time domain by extremely serializing the data processing. The specific solution then exists as a sequence of instructions which may perform arithmetic, logic, transfer, address, and branching operations. One of the drawbacks is a relatively low performance. Another one must be seen in the fact that an overlay of a specific structure (both in space and time) on the given task unnecessarily complicates design and understanding; many of the former difficulties with hardware are only shifted to software.

The other way realizes flexibility mainly in the hardware realm. Table 1 shows some properties of the so-called "uncommitted logic array" (ULA), often also called "standard logic array" (SLA) or "master slice gate array" (MSGA). Prefabricated and tested arrays of basic circuits (often only identical gates) are connected by one or more layers of metal that represent the specific solution (see |2|). Personalization is done by the manufacturer or by the user through connecting or disconnecting basic circuits and pins. Such arrays may yield very fast circuits; these may be the logic elements of the future for high-speed, high-volume data processing units. CAD, which implies a complete design, is available in some cases.

Table 1 : properties of ULA, SLA and MSGA modules

    Technology         :  I²L, MOS, CMOS
    Number of gates    :  less than 1000
    Wiring             :  metal
    Speed of gate      :  1 ns or less
    Power consumption  :  several watts

The substitution of hand-crafted logic by (semi-)automatically created designs has another effect on the designer: he may feel that his freedom has been reduced, and he would like to be able to perform as creatively as before. Special conditions may still tolerate that design philosophy, but the goal of increasing the overall quality of digital design will be more important in the future than creating one's own, often too tricky, solutions. Designers have to accept this tendency and should not try, as before, to make use of all elements if this reduces the clarity of the design and increases the problems of testing. CAD always means some standardization in structures and circuits; today's and even more tomorrow's gain in using LSI and VLSI devices lies in adapting tasks to circuits, and not vice versa. The price that must be paid is not as high as it was with SSI and MSI circuits. Only with this postulate realized will a major breakthrough in computer-aided logic design and testing be possible. Using improved design instruments may help in acquiring this evaluation knowledge more effectively, with short turnaround times.

Table 2 refers to a different class of prefabricated circuits, which are completely finished including wiring and packaging. In addition to the immediately useful logic, they contain additional elements that are necessary for the personalization process: burning off NiCr fuses or inducing migration in pn junctions changes the interconnections of the chip to the desired configuration. This process is sometimes also called programming, despite the fact that it is related to hardware.

Table 2 : field-programmable logic circuits

    Module type                       1st level   2nd level   Circuit type      Application
    P. random access memory (PROM)    AND fixed   OR var.     comb. and sequ.   general mapping
    P. logic array (PLA, PLS)         AND var.    OR var.     comb. and sequ.   complex functions
    P. array logic (PAL)              AND var.    OR fixed    comb.             slightly restricted
    P. multiplexer (PMUX)             -           -           comb.             routing and selection
    P. ROM patch (PRP)                -           -           comb.             special
    P. gate array (PGA)               -           -           special           special

Table 2 indicates that, in principle and with some exceptions, these circuits have two logic levels, where otherwise a second level can only be achieved by wired AND resp. OR; a third level may introduce preprocessing of two input variables or internal XOR functions. Some of the circuits also contain flip-flops for state and output variables, and internal feedback lines to form sequential circuits of the Moore or Mealy type. Random access memories are the only ones which can map arbitrary complex functions, if all address variables are fed to the decoders.
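The two-level AND-OR structure that Table 2 ascribes to the PLA-type parts can be sketched as a pair of "personality" matrices. The encoding used here (1 = true literal, 0 = complemented literal, '-' = don't care) is a common textbook convention, not any particular device's fuse map.

```python
# Sketch of a programmed PLA: an AND plane of product terms over the
# inputs (a, b, c), and an OR plane selecting products per output.
AND_PLANE = [
    ['1', '1', '-'],     # product term a AND b
    ['-', '0', '1'],     # product term (NOT b) AND c
]
OR_PLANE = [
    [1, 0],              # f0 = a·b
    [1, 1],              # f1 = a·b + b'·c
]

def eval_pla(inputs):
    """Evaluate the two logic levels: AND plane, then OR plane."""
    products = [
        all(lit == '-' or inputs[i] == int(lit)
            for i, lit in enumerate(term))
        for term in AND_PLANE
    ]
    return [any(p and use for p, use in zip(products, row))
            for row in OR_PLANE]

print(eval_pla([1, 1, 0]))   # -> [True, True]
print(eval_pla([0, 0, 1]))   # -> [False, True]
```

Replacing the AND plane by a full decoder (one product term per input combination, i.e. a fixed AND level) turns the same evaluation scheme into the PROM row of Table 2, which is why only memories can map arbitrary functions.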
The logic power is restricted to functions consisting of a small number of terms. The essential feature is random access to the array contents; PAL, PROM, PGA, PLA and random access memories (ePROM, Read/Write) are specific instances of such arrays, and PMUX may be included within this class. Programmable circuits are especially suited for system development and for small volume applications with high speed operation.

Because none of the discussed approaches can cover all upcoming tasks with the same efficiency, at least three different hardware concepts will be important in the foreseeable future (see also Fig. 6):

1) Hardware implementation. Solutions are based on gates, flipflops, registers, multiplexers, and comparable elements. Their structure immediately represents the structure of the problem. Units may be faster and more complex, and wordlength, minimization of storage units, and electrical parameters are better adjustable than with most microprocessor families.

2) Firmware implementation. Solutions are mainly based on arrays, with the addition of counters and decoders. Firmware realizations on one hand may be seen as specially arranged gates and flipflops, but on the other hand are very similar to microprocessor programming; some designers therefore also use the term microcontrollers for the sequential type. Yet a positive aspect of this close neighborhood is the fact that software designers may think in terms of programming, while hardware designers still may think of gates, flipflops, and signal lines. Experience shows that firmware solutions are as easy to adjust to design changes as microprocessor programs, but better to design. They are even superior in bit handling operations, but inferior in numerical surroundings.

3) Software implementation. Microprocessors and related elements are the main building blocks; for higher performance, bit slice processors are available. But it should be mentioned that there is a great difference in the optimization goals: with controllers of a general kind, branching and optimal input serialization are much more essential than with computer applications.

Some of the related topics may be briefly discussed. What we really need is a better insight into the overall design process, and better criteria are necessary.
CAD systems that cover most aspects of digital design in an effective and reliable manner are needed as well. There is a growing number of such devices, some of them more specialized than others, and there appears to be a trend to overcome some of the difficulties which are related to other hardware structures like ULA and microprocessors.

IV. FUTURE ASPECTS OF DIGITAL DESIGN

At the moment it is hard to decide whether there will be a new hardware breakthrough or not. I estimate that the possibilities for fundamentally new devices are very limited. Units will become faster, reliability may increase and prices decrease, but all with higher complexity; the inherent logic problems and structures will not differ widely. It is likely that this will bring hardware and software engineering closer together.

Design evaluation. Comparisons between different logic designs are currently performed by counting gates, flipflops, bits, pins etc.; that is, evaluation is based on some questionable parameters. Other aspects, especially cost/performance trade-offs, processing time, and the facility of modifying already created designs, are often out of focus. There will be no single standard for measuring the overall quality of a design.

Testing. Logic design should deal more with concepts that guarantee more effective tests with less calculation, with influences on chip layout and board layout.
In many cases, interface electronics is more expensive than the digital core itself. With the trend to replace (electro-)mechanical devices by semiconductor elements, interface problems between electronics and the environment are getting harder than before. This must be kept in mind to take real advantage of semiconductor devices in general control applications for small and cheap systems.

For modelling, simulation and evaluation it would be very convenient, if not necessary, to get computer based descriptions together with the products; this would be a significant improvement. Based on a common language and some standards, more steps of the design process may then be connected to form a real CAD system. Such a system primarily supports the computer aided logic design (CALD) and the computer aided evaluation and selection (CAES) in finding the optimal solution out of several choices; additional steps like computer aided physical design (CAPD) extend the scheme. The final structure of the system to be realized is then contained in computer based product descriptions (CBPD #1 ... #n), the terms of which are compatible with the CBTD |10|. These may be translated and used for computer based performance evaluation (CBPE).

Access to CAD systems. Complex and effective CAD tools require a considerable amount of scientific and programming effort and must be run at least on powerful minicomputers. Only large companies are able to raise the necessary investments for installations of their own; the larger number of smaller companies must be supported in a different way. Service centers, possibly closely related to universities and research laboratories, may be a possible solution. Consistent data checking and transfer, complete documentation of all design steps, and power failure compatibility are only some aspects. System maintenance and consultant service to customers ought to be carefully provided for the more pretentious applications.
First results with silicon devices that act as transducers from mechanical parameters into electrical ones have been reported (e.g. |1|). With further progress in this field, transducers from and to the nonelectrical parts of a system may be realized in the same technology, and CAD may also support interface design of this kind.

Product description, documentation. It is somewhat curious that in the age of computers, products are not described by the manufacturers by means of computer based information. Modern circuits are still being described by manufacturers in the same way they have used for much smaller units: text, listings and pulse diagrams. The more complex the circuits, the higher the chance that not all essential information is provided by these descriptions. With the growing use of hardware description languages by customers, computer based product descriptions (CBPD) become feasible. But large CAD systems are quite different from commercial service programs.

In the ideal CAD concept, basic decisions are assigned to the designer, who also produces a formal description of the task. The resulting computer based task description (CBTD) should serve as a reference for all consecutive design steps, and should be easy to handle for both software and hardware designers. Computer aided test generation (CATG) and manufacture (CAM) complete this ideal concept. Fig. 3 gives an impression how this would look.

[Figure 3: An ideal CAD concept - task definition, computer based task description, choice of hardware concepts, CALD and CAES, solution, CBPE, and computer based product descriptions #1 ... #n.]

The design system LOGE, described in the following section in a short sketch, is currently under construction at the Institut fuer Nachrichtenverarbeitung at the University of Karlsruhe, Germany*.
V. DESIGN SYSTEM LOGE

As an example of an existing CAD system for logic design which meets most of the discussed criteria, the design system LOGE is described. To get satisfactory results, we first concentrated our work on the logic design process for a restricted but highly interesting class of problems. To define the task more precisely, we refer to a well known schema: all tasks are divided into a processing unit, which maps the material or data flow in "space", and a control unit, which represents the time relationships (see Fig. 4). The whole unit may be described as a Mealy or Moore automaton (see e.g. |4|).

[Figure 4: Partitioning of a task into a processing unit (material/data flow representation) and a control unit (processing sequence representation), coupled by condition and control signals; recursive partitioning with special control signals and an interface to the material flow.]

A specific type of flow diagram is used for the formal description. Experience of many years has shown its usefulness for digital design. We prefer this kind of representation because it is completely independent of any specific implementation, and easy to validate and to transform into computer input. The difference to the usual flow diagram lies in the state transition element: each processing step consists of a triplet of different symbols for branching, state transitions and outputting (see Fig. 5). Switching theory provides some useful tools to start with.

[Figure 5: A processing step - state, conditions essential for branching, clock, and next state.]

*The program development is being supported under contract LIP/100102 by the Federal Republic of Germany; the government agency is the Kernforschungszentrum Karlsruhe.
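The Mealy description of the control unit can be made concrete with a toy sketch. This is an illustration of the general automaton model, not of LOGE's actual input format: the state names, condition vectors and transition table below are invented for the example.

```python
# Sketch of a Mealy automaton: each state maps a condition-signal vector
# to a (next state, control-signal vector) pair, one step per clock.

def mealy_step(table, state, cond):
    """table: {state: {cond: (next_state, output)}}; cond is a tuple of bits."""
    return table[state][cond]

table = {
    'IDLE': {(0,): ('IDLE', (0, 0)), (1,): ('RUN', (1, 0))},
    'RUN':  {(0,): ('RUN', (0, 1)), (1,): ('IDLE', (0, 0))},
}

state, outputs = 'IDLE', []
for cond in [(1,), (0,), (1,)]:          # condition signals, cycle by cycle
    state, out = mealy_step(table, state, cond)
    outputs.append(out)
print(state, outputs)   # -> IDLE [(1, 0), (0, 1), (0, 0)]
```

A Moore machine is the special case where the output depends on the state alone, not on the current condition vector.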
Result validation by other means than the CAD tool may be impossible. Therefore questions of reliability, errors, confidential handling, and liability must be discussed in detail. Reproducibility of results due to a maintained system must also be secured. Even second sourcing of a CAD system may be of interest.

A difference to a normal computer is that with control applications, no instruction counter is available. Control sequences may be described as mappings from one bit vector (condition signals) onto another one (control signals). Firmware concepts are more versatile than other ones; recently we have therefore mainly concentrated on this type, a decision which has since been supported by the fact that programmable devices of this kind are now available as LSI modules. LOGE-MIR is currently under investigation.

LOGE consists of three main modules (for more details see |3|). The embedded algorithms are highly effective and allow large problems to be solved within seconds or minutes of computer time. Special emphasis has been given to ensure the following general properties of the system:
- Checks of the computer input for consistency and completeness.
- Possibility to embed a task into a predefined structure (essential for task modifications).
- Possibility to predefine a maximal computing time with the guarantee to receive a solution within that time.
- Impending storage overflow reduces the search space, but does not cause a stop without result.

The number of inputs and outputs together must be less than or equal to twice the word length of the computer used. Figs. 7 and 8 may give an impression of the hardware structures used for the firmware approach: masked input variables, state code and mask code address a PROM, which delivers the output code and the next state code at each clock.
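The PROM-based structure of Figs. 7 and 8 can be sketched in a few lines. The sketch is an assumption-laden toy, not the actual LOGE target hardware: the word layout (mask code, output code, next-state base) and the way selected input bits are folded into the next-state address are invented for illustration.

```python
# Sketch of a firmware controller: each PROM word holds (mask code, output
# code, next-state base address).  The mask selects which input variables take
# part in branching; their bits extend the next-state address.  There is no
# instruction counter - the state code itself addresses the PROM.

def run_firmware(prom, inputs_per_cycle, state=0):
    outputs = []
    for inputs in inputs_per_cycle:
        mask, out, base = prom[state]
        sel = 0
        for i, bit in enumerate(inputs):    # masked input variables
            if (mask >> i) & 1:
                sel = (sel << 1) | bit
        state = base + sel                  # next state code
        outputs.append(out)                 # control signals for this cycle
    return state, outputs

prom = {
    0: (0b1, (0,), 1),     # branch on input bit 0: next state 1 or 2
    1: (0b0, (1,), 0),     # unconditional return to state 0
    2: (0b0, (1, 1), 0),
}
final, outs = run_firmware(prom, [(1,), (0,)])
print(final, outs)   # -> 0 [(0,), (1, 1)]
```

Design changes then amount to re-programming PROM words rather than re-wiring gates, which is the versatility the text ascribes to the firmware concept.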
The whole system cannot be described here, but Fig. 6 shows the different types of implementation with the main output of each module.

[Figure 6: Hardware, software and firmware implementation paths of LOGE - each run produces documentation and, respectively, lists of chips and interconnecting nets, programming specifications, or net variables; the results of each run are evaluated by the designer.]

The three modules are independent of each other, but use the same task description. Implementation studies had shown that there would be a large demand for fast and simple controllers for problems that microprocessors cannot cope with. The number of different states is also adapted to the size of the computer used, and must be less than or equal to 256. The modules contain about 10 000 FORTRAN IV statements each. Both LOGE-SSW and LOGE-MAT have been thoroughly tested on a UNIVAC 1108 resp. a PDP 11/40 computer system, and are well documented. A special benefit is the generation of a complete documentation of all design steps.

Applications of LOGE to industrial design problems have shown that turnaround times for design and testing are significantly lower than with conventional designs. Faults in the final solution have been due only to faulty circuits, wrong wiring, and incorrect definition of the task itself. Our experience has proven that fast and efficient logic design tools need a profound background of scientific work to be successful; in addition, the problem of comfortably handling such a system may lead to the same amount of effort as the design of the algorithms itself. We now try to enlarge the scope of our work to cover other aspects of design, and to create a CAD concept that meets the demands of modern design.

[Figure 9: Modern design philosophy - product families and computer types.]

BIBLIOGRAPHY

|1| J. Angell: Micromachined silicon transducers for measuring force, pressure and motion. Proceedings of ESSCIRC 1978.
|2| W. Arnold: Gate arrays have marketers raring to go. Electronics, April 27 (1978), pp. 83 and 84.
|3| A. Merz: Schaltungssynthese und Pruefung in der digitalen Elektrotechnik. Ditzinger.
|4| W. Grass: Rechnerunterstuetzter Entwurf digitaler Steuerungen ausgehend von einer realisierungsunabhaengigen Beschreibung. Habilitationsschrift, Fakultaet fuer Informatik, Universitaet Karlsruhe, 1978.
|5| W. Grass: Steuerwerke - Entwurf von Schaltwerken mit Festwertspeichern. Springer-Verlag, Berlin/Heidelberg/New York, 1978.
|6| W. Grass: Zur Minimierung des Multiplexeraufwands bei Mikroprogrammsteuerwerken. Elektronische Rechenanlagen 20 (1978), vol. 2, pp. 57-64, and vol. 3.
|7| H.M. Lipp: Array Logic. Tagungsbericht zum 8. Internationalen Kongress Mikroelektronik, Muenchen; Zentralverband der Elektrotechnischen Industrie, Frankfurt/Main.
|8| E.A. Snow et al.: A technology-relative computer-aided design system: abstract representations, transformations, and design tradeoffs. Proceedings of the 15th Design Automation Conference, Las Vegas, 1978, pp. 220-226.
|9| H. Weber: Ein Programmsystem zur Unterstuetzung der Rechnerentwicklung. Proceedings of the second symposium of EUROMICRO, Venice (1976), pp. 49-64.
|10| H. Woitkowiak: Register-Transfer-Ablaeufe auf Netzen - Beschreibung und Synthese. Nachrichtentechnische Fachberichte 49 (1974), pp. 123-134.

COMPUTER-AIDED DESIGN of digital electronic circuits and systems
G. Musgrave, editor; North-Holland Publishing Company
(c) ECSC, EEC, EAEC, Brussels and Luxembourg, 1979

CAD IN THE JAPANESE ELECTRONICS INDUSTRY

Kenji KANI, Akihiko YAMADA and Masanori TERAMOTO
Nippon Electric Co., Ltd., Tokyo, Japan

This paper is a brief survey of the CAD activities in the Japanese electronics industry. First, notable CAD features and activities of Japanese electronics companies are described. Second, three major examples of NEC's CAD systems are described. Among the many fields of electronics, Computer, ESS (Electronic Switching System) and LSI (Large Scale Integrated Circuits) are picked up, because advanced CAD technologies of digital electronic circuits and systems can be found there. In Section 2, a computer CAD system, which has been developed in the NEC Computer Engineering Division, is overviewed; for test pattern generation of large computers, the usefulness of the Scan Path approach has been recognized in NEC, and its effectiveness is summarized. In Section 3, an ESS CAD system, which has been developed in the NEC Switching Engineering Division, is overviewed, and the performance of the PWB (Printed Wiring Board) layout program is described in detail. In Section 4, an LSI CAD system, which has been developed in the NEC IC Division, is overviewed; as an example of an important LSI CAD program, how circuit analysis programs have been utilized is also summarized.

K. KANI is with the IC Division, Nippon Electric Co., Ltd., Kawasaki, Japan. A. YAMADA is with the Computer Engineering Division, Nippon Electric Co., Ltd., Fuchu, Japan.

The Computer Engineering Division and the Switching Engineering Division have their own large computers individually. The IC Division uses another NEC ACOS/700, which is maintained by the SCC branch located at the Tamagawa plant. The above mentioned three NEC Divisions use several application programs in common through the remote terminals of an NEC ACOS/700, as shown in Fig. 1; this computer is maintained by the SCC (Scientific Computing Center) located at the Central Research Laboratories.
M. TERAMOTO is with the Switching Engineering Division, Nippon Electric Co., Ltd., Tokyo, Japan.

1. INTRODUCTION

Major Japanese electronics companies which have advanced CAD technologies are Fujitsu (Fujitsu Co., Ltd.), Hitachi (Hitachi Co., Ltd.), Mitsubishi (Mitsubishi Electric Corp.), NEC (Nippon Electric Co., Ltd.), Oki (Oki Electric Industry Co., Ltd.) and Toshiba (Tokyo Shibaura Electric Co., Ltd.). These companies are competitors, especially in the fields of Computer, Communication Systems, Consumer Products and Integrated Circuits; all of them develop computers and LSIs, and all except Mitsubishi and Toshiba develop large ESS. Therefore, the actual status quo of the other companies is confidential and remains vague. In this paper, among the various CAD activities, notable CAD features and activities of the Japanese electronics companies are presented.
[Figure 1: Computing environment of the NEC Computer Engineering, Switching Engineering and IC Divisions - remote terminals (TSS, remote batch, graphic) at the Fuchu plant (Computer Engineering Division, NEC ACOS/800 etc.), the Mita plant in Tokyo (Switching Engineering Division, NEC 2200/500 etc.) and the Tamagawa plant in Kawasaki (IC Division, stand alone systems), connected at 2400/9600 bps to a dual-CPU NEC ACOS/700 at the Scientific Computing Center (SCC) of the Central Research Laboratories, with an SCC branch at the Tamagawa plant.]

2. NEC's CAD SYSTEM FOR COMPUTERS

CAD System Configuration and Function

CAD programs for computers were first developed in Japan in the late 1950's, to design large transistorized machines. In the 1960's, many computer manufacturers started preparing CAD systems to develop the third generation computers with integrated circuits, and they completed total systems for computer design support. With the advent of large scale integrated circuits (LSIs), most CAD systems have been enhanced or reorganized to meet the requirements of the new technology, since powerful and sophisticated CAD capability has become essential to develop high performance machines with LSIs.

As an example of the latest CAD systems for computers in Japan, the system of NEC is shown in Fig. 2. This system has been used to develop the NEC ACOS series systems 200 to 900 (roughly corresponding to IBM 370/115 to 3033), minicomputers, office computers and DIPS (Dendenkosha Information Processing System). It has a centralized data base for hardware design support, and many application subsystems are connected to the data base through a data base management subsystem (DBM). The data base consists of a Design Master File (DMF) and a Component Master File (CMF): individually designed data are stored in the DMF, and library data for common use, like chip data, are stored in the CMF. DMF and CMF have the same file configuration. The data base has a hierarchical configuration; each level of the data base corresponds to a physical level, such as chip, LSI package, logic card, backboard, and unit.
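The hierarchical levels just described can be pictured with a small data-structure sketch. The structure and names here (`cmf`, `dmf`, the nesting of unit, backboard, card, package, chip) are an invented illustration of the idea, not the DMF/CMF file format itself.

```python
# Sketch of a hierarchical design data base: each physical level
# (chip -> LSI package -> logic card -> backboard -> unit) holds named
# instances of the level below; shared library parts live in a component
# master file referenced from the design master file.

cmf = {'chipA': {'gates': 200}, 'chipB': {'gates': 150}}   # component library

dmf = {                                                     # one design
    'unit1': {'backboard1': {'card1': {'pkg1': ['chipA', 'chipA', 'chipB']}}},
}

def total_gates(node):
    """Walk the hierarchy, summing gate counts from the component library."""
    if isinstance(node, list):                 # leaf: list of chip references
        return sum(cmf[c]['gates'] for c in node)
    return sum(total_gates(child) for child in node.values())

print(total_gates(dmf))   # -> 550
```

Keeping library data in one place (the CMF) means an engineering change to a chip type propagates to every design that references it.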
[Figure 2: CAD system for computers - firmware design and hardware design flows with logic simulator, physical design support (LSI package, logic card, back board), firmware design support, data base management, test generator, logic diagram generator and post processor; the outputs are firmware for shipment and digital data for production.]

The DBM subsystem used here was developed for CAD purposes. The firmware design support [32] of the system has a general purpose microprogram assembler and an automatic flowcharter. System generation capability is also supported, to get the firmware corresponding to a customer's system configuration.

Systems 800 and 900, the largest models of the NEC ACOS series, use low level CML (Current Mode Logic) LSI chips (0.7 ns/gate, 7 picojoule/gate, max. 200 gates/chip). These chips are packaged in a high density LSI package (max. 110 chips/package, max. 3,500 gates/package). Fig. 3 shows a photograph of LSI chips on a film carrier and an LSI package. Wiring design for the ceramic substrates of the packages is automated almost 100% by an automatic router.

Automatic Test Generation

With the advent of LSIs, the problem of testing logic cards or LSI packages has become increasingly difficult. An efficient solution of this problem requires much effort in both test generation technique and easily testable design. Many test generation systems have been developed in Japan [4]. The following is the latest example of an automatic test generation system developed by NEC [33].

The test generation concept is based on an extended D-algorithm, which can treat up to 3,000 gate sequential circuits [33]. It can treat various flip flops and functional elements as primitive elements; the functional elements include Read Only Memory (ROM), Random Access Memory (RAM), and Content Addressable Memory (CAM), so the system can efficiently generate test patterns for logic circuits including these memory elements. Random number test generation and extended D-algorithm test generation can operate successively: the former is effective in the early stage of fault detection, and when its efficiency decreases, the generation mode is switched to the latter. The combination of the two generation methods can produce test sequences with high fault coverage in a rather short period of time.

The following ten logical values are used to represent the state of each element in the circuits, for high speed processing:

  0      logical "0"
  1      logical "1"
  X      either logical "0" or logical "1" (don't care)
  D      logical "1" in a fault-free circuit but logical "0" in a faulty circuit
  D-bar  logical "0" in a fault-free circuit but logical "1" in a faulty circuit
  P      positive clock pulse
  N      negative clock pulse
  u      unknown state, easily set to logical "0"
  o      unknown state, easily set to logical "1"
         unknown state
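The bookkeeping behind D and D-bar can be sketched directly. This is a generic illustration of the D-calculus, not NEC's implementation: each composite value is modelled as a (good-machine bit, faulty-machine bit) pair, and the clock-pulse and "easily set" values of the full ten-value alphabet are omitted for brevity.

```python
# Sketch of composite values for D-algorithm test generation:
# D = (1, 0) means "1 in the fault-free circuit, 0 in the faulty one".

ZERO, ONE, D, DBAR = (0, 0), (1, 1), (1, 0), (0, 1)

def v_and(a, b):
    """AND applied to good and faulty machine in parallel."""
    return (a[0] & b[0], a[1] & b[1])

def v_not(a):
    return (1 - a[0], 1 - a[1])

# Driving D through an AND gate needs the other input at non-controlling 1:
assert v_and(D, ONE) == D        # fault effect propagates
assert v_and(D, ZERO) == ZERO    # blocked by the controlling value
assert v_not(D) == DBAR          # inversion flips D into D-bar
```

A test pattern is found once a D or D-bar reaches a primary output, i.e. the good and faulty machines disagree at an observable point.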
Sequential circuits are transformed to an iterative model after feedback loops and flip-flop output connections are cut automatically. The system configuration of this automatic test generation system is shown in Fig. 4.

[Figure 4: Total configuration of the automatic test generation system - physical and logical design information is processed by a preprocessor, the test generator, a fault simulator, and a postprocessor.]

[Table 1: Some automatic test generation results - six circuits of 503 to 1263 gates with 1551 to 3622 faults, including a 64 bit RAM, 16 bit CAMs and Scan Path circuits; fault coverage ranges from about 86% to 99.4%, with 87 to 150 test patterns and CPU times from about one second to 13 minutes on a 1 MIPS computer.]

Main features of this system are as follows:
(1) It is easily applicable to both combinational and sequential (synchronous and asynchronous) circuits.
(2) Various flip flops and functional elements (ROM, RAM, CAM) can be treated as primitive elements.
(3) Random number test generation and extended D-algorithm test generation can operate successively.
(4) Sequential circuits with Scan Path can be treated as combinational circuits.
(5) It can provide test sequences for both the input/output connector pin access mode and the all IC (Integrated Circuit) pin access mode of ATE (Automatic Test Equipment). By using the latter mode, test sequences with high fault coverage can easily be obtained, because all IC pins can be used as test points.

Some application results of this test generator are shown in Table 1. The test generation time for an average 1,000 element circuit is about 3 to 5 minutes on a 1 MIPS computer; the number of test sequences is 100 to 160, and the fault coverage is 90% to 100%. Circuits including RAMs or CAMs are processed in reasonable time, as shown in Table 1 (circuits 1 and 4).

The effectiveness of Scan Path in automatic test generation was evaluated by using this test generation system. Sequential circuits with Scan Path can be converted to combinational circuits during test generation and testing, as the flip flops in a circuit can operate as a shift register by the aid of the Scan Path, and the contents of the flip flops can be accessed externally. Therefore, the test generation efficiency for these circuits is very high, as shown by circuits 2 and 3 in Table 1.

[Figure 5: Implementation technique of the Scan Path - (a) a master/slave D-type flip-flop with switch inputs; (b) implementation of the Scan Path.]

The improvement ratio of test generation by using Scan Path is summarized below.
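Before the figures, the scan mechanism itself can be sketched. This is a toy model, not NEC's circuit: the class name, the rotate "next-state logic" and the shift ordering are invented, and only the essential property is shown - in test mode the flip-flops form one shift register, so any state can be shifted in and the captured next state shifted out.

```python
# Sketch of Scan Path operation: serial scan-in, one functional clock,
# serial scan-out.  The combinational next-state logic is thereby made
# controllable and observable, as if its flip-flop pins were primary pins.

class ScanChain:
    def __init__(self, n):
        self.ffs = [0] * n

    def shift(self, bit):                 # test mode: one scan clock
        out = self.ffs[-1]
        self.ffs = [bit] + self.ffs[:-1]
        return out

    def load(self, bits):                 # scan a full state in
        for b in bits:
            self.shift(b)

    def capture(self, next_state_fn):     # one functional clock
        self.ffs = next_state_fn(self.ffs)

chain = ScanChain(3)
chain.load([1, 0, 1])                            # controllability
chain.capture(lambda s: [s[2], s[0], s[1]])      # toy next-state logic: rotate
observed = [chain.shift(0) for _ in range(3)]    # observability
print(observed)   # -> [0, 1, 1]
```

Because state is set and read serially, the test generator only ever has to solve a combinational problem between scan-out and scan-in, which is the conversion described above.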
(1) The test generation time is 1/2 to 1/4.
(2) The number of test vectors is 2/3 to 1/3.
(3) The fault coverage is the same or better.

The additional logic for the Scan Path configuration is just two pins and a few gates, as shown in Fig. 5. By using the Scan Path technique and a partitioning technique, automatic system level test generation for large computer systems can also be realized; the application result in NEC on large commercial computer systems with 100,000 gates or more is reported in reference [34]. Besides the automatic test generation technique itself, easily testable design considerations such as Scan Path will become more and more important to realize efficient test generation for large digital circuits with LSIs or VLSIs.

3. NEC's CAD SYSTEM FOR ESS

Outline

In 1964, the research and development of large size ESS began at the Electrical Communication Laboratories (ECL) of NTTPC (Nippon Telegraph and Telephone Public Corporation), in cooperation with NEC and the other three manufacturers. The hardware technologies employed were TTL and discrete wiring, and the CAD system was designed particularly for these technologies. Therefore, NTTPC has an influence on the configuration of the ESS CAD systems in Japan [12]. In both cases, the designed results from the CAD systems have been transferred to the manufacturers on magnetic tape, whose format is standardized and maintained by the committee members from the manufacturers and NTTPC.

In 1970, the Switching Engineering Division of NEC began to develop its own CAD system and to integrate it with Computer Aided Manufacturing (CAM) and Computer Aided Testing (CAT) systems. The system is physical design oriented and intended to be generic, and these systems have been used for various types of ESS products. A typical ESS hardware design process using NEC's CAD system is shown in Fig. 8. The main subsystems include:

MDS: a firmware design support subsystem, which consists of an assembler, an automatic flowcharter and a ROM bit editor, together with designed data input programs (PDI, DI). These programs are the same as those of the Computer Engineering Division [32].

DIMS: a data base management program (DBM) for the design data base. The DBM is specially designed for CAD to get better file handling efficiency, and has been widely used.

FDA: a functional simulation program with a high level hardware description language. It uses a compiling method. The simulator is effectively utilized for verification of hardware design and for debugging of microprograms and test programs (TP) [29].

LSS: a gate level logic simulator for large circuits (unit delay, 2 values). IC or package level inputs can be transformed to gate level if necessary.

Main frames of DIO (dual CP frames and a memory frame) and its packages are shown in Figs. 6 and 7.
The first ESS, the DIO System, began its operation in Tokyo in 1971. From 1975 to 1977, the central control of the DIO System was improved by the use of CML (MSI and LSI) and Back Wiring Board (BWB) technologies, and a new sophisticated CAD system was developed for these technologies in the same cooperative project described above. The above mentioned CAD system consists of about 360,000 source code lines written in a PL/1 subset and assembly languages.

[Figure 6: Mainframes of the DIO ESS. Figure 7: Packages used in the DIO ESS.]

The main features of the further subsystems are as follows:

DOC: a schematics (logic diagrams) drawing program for package and frame levels. Both COM and printer outputs are used for documentation.

EC-W: a program for managing engineering changes (EC) of wiring information. It updates the data base so that the EC may be reflected on the schematics correctly, and also generates the specific wiring document if the hardware is under manufacturing.

WDS: a subsystem for designing backpanel wiring. It contains such functions as minimum spanning for each net, coloring, cabling, ordering, and twisted pair assignment. For BWB routing, the same program as for PWB, called "MO-MI", is used.

FUA: a test generation subsystem for packages. ALT consists of a heuristic algorithm [2] and a parallel fault simulator. For a package which contains a blackbox LSI such as a microprocessor, only truth value simulation of manually coded test data is performed by the functional simulator.

APK: a subsystem for automated package layout, described below, together with IDS (digitized input, pattern correction) and PASS, a printed wiring board design subsystem whose outputs are various documentations and NC tapes for manufacturing and testing.

Further improvements, such as shortening the turnaround time, advanced interactive capabilities, and flexibility for the rapidly changing ESS technologies, are expected.
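The parallel fault simulator mentioned for ALT can be illustrated with the classic bit-parallel trick. This sketch is generic and illustrative, not NEC's code: bit position k of every signal word carries machine k, with bit 0 as the fault-free machine, so one pass of word-wide logic operations simulates the good circuit and several faulty ones at once.

```python
# Sketch of parallel fault simulation: word-wide AND/OR evaluates the
# good machine and all injected faulty machines simultaneously.

MACHINES = 3                      # good machine + two single stuck-at faults
ALL = (1 << MACHINES) - 1

def inject(word, machine, stuck_at):
    """Force one machine's copy of a signal to a stuck value."""
    return word | (1 << machine) if stuck_at else word & ~(1 << machine)

# Circuit f = (a AND b) OR c; fault 1: b stuck-at-0, fault 2: c stuck-at-1.
a, b, c = ALL, ALL, 0             # test pattern a=1, b=1, c=0 on all machines
b = inject(b, 1, 0)
c = inject(c, 2, 1)
f = (a & b) | c                   # one evaluation, MACHINES circuits simulated
detected = [m for m in range(1, MACHINES) if (f >> m) & 1 != f & 1]
print(detected)   # -> [1]  (c stuck-at-1 is not detected by this pattern)
```

A fault is detected when its machine's output bit differs from bit 0; undetected faults, like c stuck-at-1 here, stay on the fault list for the next candidate pattern.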
PWB and BWB Layout Design

For the development of ESS, many new packages have to be designed, so automated layout is important. The automated package layout program, APK, has been continuously improved to cope with the increasing complexity and the technology changes. Its latest version has the following features.

(1) Various types of board can be treated by defining a geometric file and some parameters. The 400 mm x 400 mm board, on which 2 lines go through between adjacent lands, is the current maximum size in practical application.
(2) The main functions are IC placement, routing and design rule checking (APK); digitized input and pattern correction (IDS); and artwork data generation (PASS).
(3) The algorithms for IC placement are the pair linking method for the initial placement, and Steinberg's assignment method and the pairwise interchange method for the iterative improvement.
(4) The routing algorithm is a generalized line search method which can vary its routing characteristics from the original line search method [18] to Lee's method, according to the given parameters. The routing program is implemented so that the parameters may be changed during the routing steps, in order to get a solution economically.
(5) Three types of via (feed through), i.e. floating vias, fixed vias, and vias whose positions are limited by the power and ground planes, are selectable.
(6) For CML circuits, some special functions, such as unicursal (no branching) spanning, placement limited by line length, and terminating resistor assignment, are taken into account.
(7) Because of its generic characteristics, the router is also applied to BWB routing practically. At this time, package placement on the BWB is performed manually.

Examples of routing results, which are used in the current ESS system, are shown in Table 2. The average density is approximately 1 square inch per IC in this example. When the density rises to 0.7 square inch per IC, the maximum density for the board, the routing rate will be reduced to around 92%.
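Lee's method, the exhaustive end of the generalized line-search router just described, can be sketched compactly. The grid, the obstacle pattern and the function name below are invented for illustration; real routers add layers, vias and cost weighting on top of this core.

```python
# Sketch of Lee's wave-propagation routing: breadth-first expansion from the
# source labels each free grid cell with its distance; the first time the
# target is reached, the label is the length of a shortest connection.

from collections import deque

def lee_route(grid, src, dst):
    """grid: list of strings, '#' = blocked cell; returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0}
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:
            return dist[(r, c)]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return None                      # net cannot be completed

grid = ["....",
        ".##.",
        "...."]
print(lee_route(grid, (0, 0), (2, 3)))   # -> 5
```

Lee's method always finds a connection if one exists, but expands many cells; line-search routers are cheaper, which is why the APK router switches between the two behaviors by parameter.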
Table 2  Example of PWB routing results

  Board size:                          210mm × 190mm (70 ICs)
  No. of lines between adjacent lands: 2
  No. of ICs:                          31 to 61
  Average no. of lines to be connected: 395
  No. of incomplete lines:             best 0, average 9.2, worst 25
  Average routing rate:                97.7%

4. NEC's CAD SYSTEM FOR LSI

LSI Technology

In NEC, the LSI age began in 1970, when desk calculator LSI chips were developed. At that time the first stage of LSI-CAD systems was prepared. Since then, CAD technologies have become increasingly important as the number of circuit elements per chip has grown. At present, 5,000-10,000 gate microcomputers and 16,000 bit RAM chips are representative high volume production LSIs. R&D activities are also being accelerated by Japan's MITI (Ministry of International Trade and Industry) VLSI project and by the NTTPC cooperative project with Fujitsu, Hitachi and NEC. An example of a high speed bipolar 8 bit LSI processor chip, recently developed by ECL and NEC[1], is shown in Fig. 9. This 4.5mm × 4.5mm chip contains about 5,000 transistors and 5,000 resistors, which are interconnected with three wiring layers.

LSI-CAD System

The LSI-CAD system is composed of the programs shown in Fig. 10. An LSI is designed in the following way. First, the basic blocks (AND gates, flip-flops, registers, etc.) are designed manually, checked carefully by a circuit analysis program (COSMOS for MOS; SPICE[19] or NECTAR[11] for bipolar), and stored in the block library. Then the chip layout begins, based on the logic diagram, which is verified by the logic simulator LOGOS[17]. For high volume production LSIs, the manually designed layout is digitized, checked and modified on a graphic system, Applicon or Calma. For small volume production LSIs, the automatic master slice layout design program, MASTER, can be used.
For artwork data verification, the DRC (Design Rule Check) program and the logic verification program PALMS have recently been developed and put into use, but these are not yet economical.

Figure 9: Bipolar LSI processor chip

Figure 10: CAD system for LSI (the logic diagram and block library feed logic simulation (LOGOS), circuit analysis (COSMOS, SPICE, NECTAR), layout design (MASTER), artwork data editing (Applicon, Calma) and artwork checking (DRC, PALMS), producing the artwork tape; test pattern generation (FOCUS, PTS), test tape editing (LOGTEG) and the mask-ROM post-processor (AROM) produce the test tape)

Table 3  Main features of the NEC LSI-CAD programs

  Program   Purpose               Main features
  COSMOS    Circuit analysis      Built-in MOS model; nodal; implicit integration
  NECTAR    Circuit analysis      Piecewise linear approach; modified tableau
  LOGOS     Logic verification    Unit, min, max, rise, fall and wire delay; 3-value
  MASTER    Automatic layout      Master slice; two-stage routing
  DRC       Artwork check         Min. spacing, min. width, enclosure checks
  FOCUS     Fault simulation      Parallel; 4-value; stuck-at faults
  AROM      Mask-ROM postprocess  Generates artwork data and test tape
  LOGTEG    Test tape editing     Generates test tapes from a common file

Figure 11: Increase in LSI development period as the number of components per chip increases (the repeating debug cycle grows faster than the initial design, wafer process and test cycle)

Figure 12: Total computer run time for LSI circuit analysis in the NEC IC Division (1972 = 1)

Another flow is to prepare the test data. The automatic test pattern generator, PTS, which was developed in the NEC Computer Engineering Division[33], is not sufficient for complex LSIs. Therefore, the simulator FOCUS is used to verify the manually designed test sequences. Mask-ROM bit patterns can be broken down automatically into the test tape and the artwork data by the AROM program.
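The "parallel" fault simulation named for FOCUS in Table 3 packs many faulty copies of the circuit into one machine word, so that a single pass of bitwise logic evaluates them all at once. The following is an illustrative sketch of that idea only, assuming an invented two-gate circuit (y = (a AND b) OR c) and an invented fault list; FOCUS's actual circuit model, 4-value logic and data structures are not described in the text.

```python
# Parallel stuck-at fault simulation sketch: bit i of each signal word
# carries the value seen by faulty machine i, with bit 0 the fault-free
# machine. One bitwise evaluation simulates all machines together.

def parallel_fault_sim(a, b, c, faults):
    """Return the subset of `faults` detected by input pattern (a, b, c)."""
    n = len(faults) + 1              # machine 0 is fault-free
    ones = (1 << n) - 1

    def widen(v):                    # replicate a scalar input to all machines
        return ones if v else 0

    def inject(name, word):          # force stuck-at values in faulty machines
        for i, (net, stuck) in enumerate(faults, start=1):
            if net == name:
                word = word | (1 << i) if stuck else word & ~(1 << i)
        return word

    A = inject("a", widen(a))
    B = inject("b", widen(b))
    C = inject("c", widen(c))
    N1 = inject("n1", A & B)         # internal net n1 = a AND b
    Y = inject("y", N1 | C)          # output y = n1 OR c
    # machines whose output differs from the fault-free machine (bit 0)
    detected = Y ^ (ones if Y & 1 else 0)
    return [faults[i - 1] for i in range(1, n) if detected >> i & 1]

faults = [("a", 0), ("b", 0), ("n1", 0), ("c", 1)]
# Pattern a=1, b=1, c=0 sensitizes the AND path through to the output:
found = parallel_fault_sim(1, 1, 0, faults)
```

With a 32- or 64-bit word, one evaluation pass covers 31 or 63 faults, which is what made the technique economical on the machines of the period.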
From the manufacturing standpoint, the artwork data editing systems, Applicon or Calma, and the test tape editing program, LOGTEG, are important, because a typical set of artwork data is composed of 100,000 rectangles and a typical test sequence contains 10,000 patterns. The main features of the programs mentioned above are summarized in Table 3.

There is a tendency that, as the number of components per chip increases, the LSI development period becomes longer, as shown in Fig. 11. This is caused mainly by the increase in the number of repetitions of the design-wafer process-test cycle due to various kinds of errors. Therefore, LSI CAD should aim at reducing this number of repeated cycles in addition to reducing the initial cycle period.

LSI Circuit Analysis

During the late 1960s, IC designers started to use the circuit analysis programs which had been developed in NEC. During the early 1970s, at the beginning of the LSI age, the total computer run time for LSI circuit analysis increased rapidly because of the growth in design data volume and accuracy requirements, as shown in Fig. 12. Program performance has been improved more than 100 times during these ten years. As mentioned above, COSMOS, which has an accurate built-in MOS model, is at present used for MOS LSI design verification. SPICE, which was developed at the University of California, or NECTAR, developed in the NEC Central Research Laboratories, is used for bipolar LSIs. NECTAR is a unique program which guarantees obtaining a DC solution whenever one exists[22]. A problem is that the maximum circuit size which can be analyzed economically by these programs is limited to about 200 gates. MOTIS[14] seems a good approach to this problem if its computational error can be evaluated more theoretically. Statistical analysis and parameter optimization are important, but they are not yet economical in the LSI Division.
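The "implicit integration" credited to COSMOS in Table 3 can be illustrated on the simplest possible transient problem, a single RC node: an implicit (backward Euler) update stays numerically stable for any step size, which is why such methods suit the stiff equations of LSI circuits. This is a one-node sketch with arbitrary component values, not the nodal formulation of COSMOS or SPICE.

```python
# Backward Euler on C dv/dt = (vin - v)/R: the new voltage appears on both
# sides of the update, v[n+1] = v[n] + (h/RC) * (vin - v[n+1]), so each step
# solves a (here trivial) implicit equation. Stable for any step size h.

def rc_backward_euler(vin, R, C, h, steps, v0=0.0):
    v = v0
    tau = R * C
    for _ in range(steps):
        # solve (1 + h/tau) * v_next = v + (h/tau) * vin for v_next
        v = (v + (h / tau) * vin) / (1 + h / tau)
    return v

# Step response of a 1 kOhm / 1 uF node: after 5 time constants the
# capacitor voltage is within about 1% of the 5 V source.
v = rc_backward_euler(vin=5.0, R=1e3, C=1e-6, h=1e-5, steps=500)
```

In a full analyzer the same implicit step is applied to the whole nodal system at once, so each time step requires solving a set of (generally nonlinear) simultaneous equations; that cost is what limited economical analysis to a few hundred gates in this period.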
5. NOTABLE CAD FEATURES AND ACTIVITIES IN JAPAN

The notable CAD features and activities of Japanese electronics companies are as follows.

Total Features

(a) During the early 1960s, major Japanese electronics companies started to develop their own CAD systems on their own computers. Since then, the CAD systems of each company have been almost entirely made in-house and have rarely been released to the outside. Thus, CAD program circulation is quite limited in Japan.

(b) During the early 1970s, most of the above companies purchased interactive graphic design systems from the U.S.A., such as Applicon, Calma and Computervision systems. They have used these systems for error checking and minor changes of PWB or LSI artwork data, as stand-alone systems with magnetic tape interfaces to their large-computer CAD systems.

(c) Remote TSS terminals have been popular since around 1975, but remote graphic terminals have not yet become popular. Improvement of the working environment for CAD users, including the above, seems to be too slow compared with the U.S.A.

(d) As an exception to the independent CAD development described in (a), the Electrical Communication Laboratories (ECL) of NTTPC has researched many CAD systems, particularly for ESS, and has developed them cooperatively with Fujitsu, Hitachi, NEC and Oki.

(e) As a part of the VLSI project, Japan's MITI is supporting the development of CAD systems for VLSI from 1976 to 1979.

Activities in Each CAD Technology Field

(f) Long after the two well known circuit analysis programs, NET-1 and ECAP, were produced in the U.S.A. in 1964-1965, several Japanese companies started to develop such programs. At present, NTTPC's ECSS[27], Fujitsu's FNAP[13], NEC's NECTAR, etc. have been released. Also, ECAP, ASTAP and SPICE, which were made in the U.S.A., have been widely used in the Japanese electronics industry.
(g) In transmission equipment design, analysis and optimization programs for linear circuits, such as filters and equalizers, have been used since 1957. The linear AC optimization algorithm was greatly improved by an iterative Chebyshev approximation method developed in NEC in 1968[7]. Also, linear AC tolerance assignment programs have been used since the early 1970s.

(h) For device simulation, two studies from around 1970 are worth noting. One is a modeling method which reduces semiconductor device analysis to a lumped network analysis[8,23]. The other is a numerical study of the AC characteristics of semiconductor devices[16]. Recently, two dimensional analyses of bipolar and CMOS devices have been made in ECL[26].

(i) For LSI chip layout design, many efforts have been made, as shown in Table 4. Generally speaking, building block layout programs are not yet practically economical, whereas master slice layout programs are in effective use. The reason is that the former are required to minimize the chip size, while the chip size is fixed in the latter. Graph theoretical investigations of the layout problem are active in Japan[10]. Artwork data verification methods have recently been studied in many companies[6,35].

Table 4  LSI chip layout programs developed in Japan

  Program[Ref.]  Developer        Type            Main feature
  LILAC[15]      Hitachi          Building block  Includes partitioning, assignment and routing
  TAPLS[20]      Toshiba          Building block  Convenient for interactive graphical design
  ROBIN[9]       NEC              Building block  Minimizes chip area during routing
  CAD75[3]       Hitachi          Master slice    Offline system; process interface with graphic design
  [24]           Oki              Master slice    Online graphic display
  MARC[31]       NTTPC, F.H.N.O.  Master slice    Two-stage routing; offline interface with graphics
  COMPAS[28]     NTTPC, NEC       Multichip LSI   Two-stage routing
  BLOOM[36]      NEC              MOS gate array  Determines ordering of gates

(j) Sharp Co.
and the University of Osaka have developed a minicomputer-based PWB layout design system[21]. The system supports both iterative automatic placement and wiring and interactive design. One-layer curved-line boards, which are usually used in analog systems, can be handled in addition to the two-layer regular boards usually used in digital systems.

(k) CAD data bases have two features in Japan. First, there are two approaches: one is to develop a special data base designed for CAD, as described in Sections 2 and 3; the other is to utilize a general purpose data base[25]. Second, most CAD data bases have so far been oriented to physical design rather than logical design.

(l) Recently, functional simulators and register-level simulators have been developed and used in several companies[29]. Keio University has developed a multi-level simulator in which a parallel value simulation technique is taken into account[30].

Concluding Remarks

There is rapid progress in semiconductor integrated circuit technologies, i.e., from LSI to VLSI. Therefore, LSI CAD technologies must advance further, especially in the layout and test pattern generation fields, and computer and ESS CAD technologies must change greatly in order to avoid VLSI chip redesign cost and time.

References

[1] Akazawa, Y., et al.: A High Speed 1600 Gate Bipolar LSI. IEEE ISSCC (1978) 208-209.
[2] Arima, T., et al.: A Heuristic Test Generation Algorithm for Sequential Circuits.
[3] Chiba, T., et al.: Master Slice LSI Computer Aided Design System.
[4] Funatsu, S., et al.: Automatic Test Generation for Large Digital Circuits.
[5] Funatsu, S., et al.: Automatic System Level Test Generation and Fault Location for Large Digital Systems. 15th DA Conf. (1978) 347-352.
[6] Igarashi, K., et al.: Correction and Wiring Checking System for Master Slice LSIs.
[7] Ishizaki, Y., and H. Watanabe: An Iterative Chebyshev Approximation Method for Network Design. IEEE Trans. CT-15, no. 4 (1968) 326-336.
[8] Kani, K., et al.: A Nonlinear Lumped Network Model of Semiconductor Devices with Consideration of Recombination Kinetics.
[9] Kishimoto, A., et al.: ROBIN, a Building Block LSI Routing Program.
[10] Ohtsuki, T.: Graph Theory and Combinatorial Algorithms for Design Automation. Monograph of Technical Group on DA of Information Processing Society of Japan, DA31-2 (1977) (Japanese).
[11] Ohtsuki, T., et al.: NECTAR2, a Circuit Analysis Program Based on a Piecewise Linear Approach.
[12] Kani, K.: A Heuristic Procedure for Ordering MOS Arrays.
[13] FNAP: Circuit Analysis Program (Fujitsu).
[14] Chawla, B.R., H.K. Gummel, and P. Kozak: Operational Features of an MOS Timing Simulator. 12th DA Conf. (1975) 95-101.
[15] Advanced LILAC, an Automated Layout Generation System for MOS/LSI (Hitachi).
[16] A Small Signal Calculation for One-Dimensional Transistors. IEEE Trans. ED.
[17] LSI Logic Simulation System, LOGOS2 (NEC).
[18] Mikami, K., and K. Tabuchi: A Computer Program for Optimal Routing of Printed Circuit Connectors. IFIPS (1968) H47-50.
[19] Nagel, L.W., and D.O. Pederson: SPICE (Simulation Program with Integrated Circuit Emphasis). Memorandum No. ERL-M382, Univ. of California, Berkeley (1973).
[20] High Packing Density LSI Layout System with Interactive Facilities (Toshiba).
[21] Nishioka, I., et al.: A Minicomputerized Automatic Layout System for Two-Layer Printed Wiring Boards.
[22] Ohtsuki, T., T. Fujisawa, and S. Kumagai: Existence Theorems and a Solution Algorithm for Piecewise Linear Resistor Networks. SIAM J. Math. Anal., vol. 8 (1977).
[23] Ohtsuki, T., and K. Kani: A Unified Modeling Scheme for Semiconductor Devices with Applications of State-Variable Analysis. IEEE Trans. CT-17, no. 1 (1970) 26-32.
[24] Ozawa, T., et al.: Master slice layout design system with online graphic display (Oki).
[25] Soga, M., et al.: Engineering Data Management System (EDMS) for Computer Aided Design of Digital Computers.
[26] Sudo, T.: Analysis of LSI Devices and Their Models.
[27] Sugimori, M., et al.: ECSS Circuit Analysis System (NTTPC).
[28] Sugiyama, Y., et al.: Placement and Routing Program for Master-slice LSI's.
[29] Teramoto, M., et al.: A Module Level Simulation Technique for Systems Composed of LSI's and MSI's.
[30] Tokoro, M., et al.: RTL Simulator for Modular Design (Keio University).
[31] Ueda, K., et al.: LSI Layout and Wiring System MARC. USA-Japan DA Symposium (1975) 87-94.
[32] Yamada, A., et al.: Microprogramming Design Support System.
[33] Yamada, A., et al.: Test Generation Systems in Japan. 12th DA Conf. (1975) 114-122.
[34] Yamada, A., et al.: Designing Digital Circuits with Easily Testable Consideration.
[35] Yoshida, Y., et al.: A Layout Checking System for Large Scale Integrated Circuits.
[36] Yoshizawa, H., et al.: Routing Program for Multichip LSIs.

TECHNICAL SESSION III
Chairman: G. FREEMAN, CAD Centre, United Kingdom

ASPECTS OF A LARGE, INTEGRATED CAD SYSTEM

Fred Hembrough, Manager, CAD/CAM Software Department
Richard Pabich, CAD/CAM Program Manager
Raytheon Company, Bedford, Massachusetts, USA

Much emphasis has been placed on the development of sophisticated algorithms and software tools to support the design automation process. As CAD progresses from the realm of a research tool towards integration into the design, development and production activities, a number of aspects, both technical and nontechnical, must be addressed in order to fully realize CAD's enormous potential. It is the intent here,
based on the author's experience, to describe those aspects which are critical to the successful deployment of a computer aids to design system, with particular emphasis on the support of a growing, diverse user community. While this paper will address the technical aspects of a large CAD system, it will also address a number of points related to the introduction of an integrated CAD system into a large, multifaceted electronics company. These will include:

• Interface with the user: acceptance of CAD, training of system users, documentation requirements.
• CAD application support: development of design goals; how does CAD fit into the hardware design cycle?
• System evolution: responsiveness to changing technologies and user requirements; CAD software development procedures.

INTRODUCTION

The development, installation, and acceptance of a computer aids to design (CAD) system must be viewed as both a technical and a managerial challenge: a technical challenge in that the system must be responsive to emerging hardware technologies and analytical techniques; a managerial challenge in that acceptance of automation depends not only on the provision of responsive technical capabilities but also on the education of potential users at both the managerial and designer levels. The paper will examine these topics as related to the introduction and evolution of the Raytheon CAD system. The character of the automation support is dependent on the nature of the product to be designed and the processes used for fabrication. More specifically, the
important first steps in the "tailoring" of a CAD system are:

• The development of a clear understanding of the design process flow, with emphasis on the data interfaces between supporting organizations such as engineering, drafting, testing and document control.
• The development of a clear understanding of the present and projected product mix, with emphasis on the technical aspects of the automation support requirements: for example, custom LSI design versus commercial ICs, printed circuit boards versus wirewrap boards, or high speed versus low speed logic.

These steps are critical to the long-term success of a CAD system. The discussion below is organized around four areas: technical capabilities, applications support, software engineering and system support.

TECHNICAL CAPABILITIES

A large scale CAD system can be viewed as providing automation support throughout the hardware design cycle. An idealized view of the hardware design cycle, together with some common automation support capabilities, is depicted in Figure I: from system/subsystem design (system simulation, functional simulation, reliability analysis) through module design (gate level simulation, circuit analysis, microwave analysis), product design (logic diagramming, placement and routing, mask making, N/C machine tool control, fabrication documentation), fabrication, and test and evaluation (automatic test generation, automatic test evaluation, automatic fault isolation, test data generation, test translation, continuity testing) to release to production.

An integrated system provides not only automation support but also the means to pass data from one design phase to the next with minimal manual intervention, thus reducing processing and throughput time. For example, logic interconnection data, verified by logic simulation, can serve as input to the product design phase.

SYSTEM TAILORED TO NEEDS

The key point here is that automation support requirements are dependent on the nature of the product to be designed and developed and on the supporting facilities and processes. However, once a plan is formulated, the introduction of automation into the design cycle can be a gradual process, so that individuals and organizations can adjust to changes in their way of doing business. A description of Raytheon's introduction to CAD illustrates this point.

CAD AT RAYTHEON

Raytheon became involved in CAD in 1964 when it obtained the Electronic Circuit Analysis Program (ECAP),
now called the Raytheon Circuit Analysis Program (RAYCAP). This program could perform only DC, AC and piecewise nonlinear transient analysis. In the late 60's, ECAP was augmented by a number of circuit analysis programs such as SCEPTRE, CIRCUS and NASAP. With one common circuit description, an analog circuit designer could perform nonlinear DC and transient analyses, small signal linear AC analysis, and sensitivity analysis. By 1971, initial condition generation for capacitors and inductors and a worst case analysis had been added; Functions of a Complex Variable and a technique for automatic generation of model parameters for nonlinear devices had also been added to the system. In 1973 a rudimentary analog circuit simulator became available, and in 1974 a truly sophisticated analog simulator, AEDCAP, followed. Since 1974, considerable effort has been applied by Raytheon to increase the capabilities of AEDCAP; by 1976 these included a Root Sum Square (RSS) analysis, Fourier analysis, and Monte Carlo analysis.

Digital designers obtained their first verification tool in 1968 when the Logic Machine Aids to Design (LMAD) program became available. LMAD was, at that stage, a simple two-state gate level simulator; it later became a more powerful four-state simulator (logic 1, logic 0, undefined and high impedance). The data in the LMAD system became the control point and data source for other programs, such as an automatic test generation system for combinational logic and a computer aided routing system. In 1973, LMAD was integrated with a test grading capability and renamed the Grader/Simulator System (GRASS). In 1970, a tool to perform digital verification at the architectural level had been introduced: CDL, the Computer Design Language. From 1973 to 1977, efforts were extended to refine both CDL and GRASS in technical capabilities and in execution; by 1977, such capabilities as Load Analysis, Race Analysis and Worst Case Timing had been added to GRASS. In 1975 it became possible, via the CDL to GRASS interface, to use data verified at the functional level to verify gate level designs. This system provided extensive simulation and test generation capability.
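The four states mentioned above (logic 1, logic 0, undefined, high impedance) change how every gate must be evaluated: a controlling input still decides the output, but an unknown or floating input propagates uncertainty. The sketch below shows one way such evaluation can be encoded; the string encoding and truth tables are illustrative, standard multi-valued-logic conventions rather than LMAD's published internals.

```python
# Four-state gate evaluation sketch: 0, 1, X (undefined), Z (high impedance).
# A gate input sees a floating (Z) wire as undefined (X); a controlling 0
# on an AND gate dominates even an X on the other input.

ZERO, ONE, X, Z = "0", "1", "X", "Z"

def as_input(v):
    return X if v == Z else v          # gates treat a floating wire as unknown

def and4(a, b):
    a, b = as_input(a), as_input(b)
    if a == ZERO or b == ZERO:         # controlling value decides the output
        return ZERO
    if a == ONE and b == ONE:
        return ONE
    return X                           # any remaining uncertainty propagates

def not4(a):
    a = as_input(a)
    return {ZERO: ONE, ONE: ZERO}.get(a, X)
```

Starting every storage node at X and letting known values displace it is what lets a four-state simulator expose uninitialized flip-flops and bus conflicts that a two-state simulator silently hides.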
Raytheon first began applying automation techniques to product design in 1968, with the development of a system that took digitized data and transformed it into artwork and documentation. The process of taking a verified or released design and producing all the artwork and documentation to fabricate that design lends itself extremely well to automation, so the task was initiated to automate, as totally as possible, the process of developing the data that inputs the artwork generators. In 1970, an interactive system for microelectronics mask making that placed and interconnected array structures was introduced, and systems for automatically placing and routing microelectronic devices were made available. The technology and techniques developed for microelectronic work were applied in 1975 to printed circuit board (PCB) work. In this time frame the system capabilities were expanded and the concepts of integration and data management were introduced.

During this period, the concepts of data bases and data base management began to emerge. The most significant occurrence in this time period was the introduction in 1976 of the TOTAL data base management system. TOTAL provided the repository for all data and transactions in the CAD system; it is the one capability that transformed a group of CAD programs into the integrated RAYCAD system.

As digital designs became more complex, and as hardware designs using standard integrated circuit types became common, the concept of an integrated circuit (IC) library emerged in 1972. Since that time, the library concept has been expanded to include not only IC logic information but also electrical and physical characteristics for both ICs and more complex hardware components.

Today, Raytheon's CAD system (RAYCAD) provides automation tools integrated around a data base management system. The system is structured such that, to the greatest extent possible, data is automatically passed from one phase of the design cycle to the next. As users are trained and as acceptance is won, the system can be expanded. Because each project has different schedule requirements and technical goals,
it is important that RAYCAD personnel and project management meet to develop a detailed CAD-supported project plan. Here the concept of Applications Support is introduced. A key to the acceptance of the RAYCAD system lies in the attitudes of the hardware project management personnel, who ultimately are accountable for the success or failure of a task. These individuals have been made aware of, or have experienced, the benefits of CAD and now consider CAD an integral part of the hardware design cycle. The beneficial effects of CAD during the design process have been experienced first-hand, and the system is widely used by design engineers, test engineers, draftsmen and technicians.

As the volume of new digital designs increased, it became obvious that two types of systems would be necessary to satisfy the different types of technologies being employed in the company: a batch routing system for large volume and multi-layer PCB design efforts, and interactive systems for small volume and two-layer PCB work. The high volume, multi-layer PCB design problem lends itself to a batch implementation; in contrast, as digital designs were becoming more complex, a much more sophisticated interactive routing system became necessary for the two-layer work.

Automatic test generation techniques were first introduced to Raytheon in 1971 with the release of the Automatic Functional Test and Evaluation (AFTER) system. AFTER, making use of Roth's algorithm, automatically produced an optimal functional test for any digital design that was combinational in nature. However, as the volume of new digital designs increased, most became sequential in nature, and AFTER could not be used; a method of dealing with sequential devices had to be devised. In 1972, a test grader system was developed to evaluate the completeness of manually prepared tests for sequential circuits. This was the same test grader that was merged with LMAD to form the GRASS system in 1973. As with the design verification and product design systems, the test and evaluation system was integrated with the data base management system in 1976.
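AFTER's test generation rested on Roth's D-algorithm, which implicitly searches for an input vector that makes a faulty circuit's output differ from the good circuit's. For a tiny combinational block, the goal of that search can be shown by brute force. This sketch is an illustration of the objective only, not Roth's algorithm or AFTER's implementation; the circuit (y = (a AND b) XOR c) and the fault list are invented.

```python
# Brute-force combinational test generation sketch: a stuck-at fault is
# detected by any input vector on which the faulty circuit's output differs
# from the fault-free circuit's output.
from itertools import product

def good(a, b, c):
    return (a & b) ^ c

def faulty(a, b, c, fault):
    net, stuck = fault               # force one input net to its stuck value
    a = stuck if net == "a" else a
    b = stuck if net == "b" else b
    c = stuck if net == "c" else c
    return (a & b) ^ c

def find_test(fault):
    """Return an input vector detecting the stuck-at fault, or None."""
    for vec in product((0, 1), repeat=3):
        if good(*vec) != faulty(*vec, fault):
            return vec
    return None                      # undetectable: the logic is redundant

test = find_test(("a", 0))           # a vector exposing "a stuck-at-0"
```

Exhaustive search is exponential in the number of inputs; Roth's algorithm instead propagates a symbolic discrepancy value ("D") from the fault site to an output and justifies the required input values, which is what made automatic test generation practical for real combinational designs.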
The first turnkey interactive graphics system for two-layer work, a REDAC system, was purchased in 1972. The technologies and techniques used for microelectronics and two-layer PCB work at MSD were melded in 1976 into the multi-layer Computer-Aided Routing of Interconnections (CARI) system. In 1977, CARI was expanded to encompass high production two-layer PCB work, was integrated with the data base management system, and was enhanced to include interactive completion of routing solutions. The batch routing system was, like CARI, also integrated with the data base management system, so that the two primary Raytheon circuit board routing systems can now automatically accept design data that has been completely verified through simulation and stored in the data base.

The process described above is critically important to the acceptance and efficient utilization of an integrated CAD system. If early definition of CAD's role in a given project is not specified, then inefficient use of CAD by project personnel will, at best, limit its effectiveness and, at worst, have an adverse impact on project schedules and on system design.

SOFTWARE ENGINEERING

In order to develop automation tools which are responsive to user needs, well defined software development procedures must be established. In an environment where the measure of success is not only technical responsiveness
but also the degree of user acceptance, the emphasis must be placed on the development of "useable" software. Useable software can be characterized as being:

Technically responsive
Well tested
Well documented
User oriented

In addition, with the introduction of modular design and defined programming practices, the concept of "maintainable" software can be introduced. More specifically:

Well written user documentation reduces ambiguities in using the software and provides clear direction in dealing with common failure modes.
Well tested software reduces the occurrence of unexpected failures due to unanticipated operating modes.
User oriented software provides a user interface which employs familiar notation and supplies clear user directives and output formats.
Software enhancements can be implemented with minimal impact if modularity and expandability have been considered.
Well defined programming practices and complete software documentation minimize familiarization time for new personnel.

Software developed with these attributes in mind not only will gain wider user acceptance, but will also reduce the software support requirement.

RAYTHEON'S APPROACH

With these points in mind, we can now introduce the specifics of the RAYCAD software development cycle. Figure III presents a block diagram of this cycle, depicting the relationships of critical activities such as requirements definition, software design, implementation, testing, design review and documentation.
Figure III: The RAYCAD software development cycle (user requirements generation feeds technical specification and functional specification generation; design, implementation and testing follow, against defined testing requirements and results; project management review and approval, user review and approval, and user manual and maintenance manual generation precede release to the user, with user-requested changes feeding back into the cycle)

The specifics of each activity are described below; the means by which user requirements are developed will be discussed in a later section. First, it is appropriate to describe how a general user requirement for a CAD software capability is translated into a detailed specification. The Technical Specification, which is generated by RAYCAD personnel, is the end result of an interactive process between user and implementer to formally define the software characteristics. Formal user approval is required before software design can begin and before the User Manual can be released. This ensures, first, that the software to be developed is responsive to predefined user needs and that the design is responsive to the user requirements, and also that the user documentation is sufficient to allow efficient use of the software. The proposed software design is reviewed periodically by project personnel to ensure that the design guidelines are being followed and that design flexibility has been considered. Testing requirements and results are reviewed by project personnel to ascertain the completeness of the testing approach, to review the test results, and to approve the software for release to the user.

In final form, the Technical Specification will contain:

Scope of software capability
Major system interface requirements
Operating environment
Preliminary user interface requirements
Preliminary testing requirements
In order to fully describe the scope of the RAYCAD activities. The Functional Specification serves as the baseline for the Maintenance Manual which is generated at the completion of the development phase. Scope of Software Capability Detialed Software Interface Design Detailed User Interface Design Detailed Testing Approach Detailed Organization Formal reviews are conducted periodically to evaluate key aspects of the design and testing approach. some additional topics. SYSTEM SUPPORT System support is defined as a long-term commitment of resources. INTEGRATED CAD SYSTEM Detailed design can begin only after the document has been formally accepted (signed) by the user. and other support functions will be addressed in the next section. user training. a system bug takes precedence over a system enhancement. the problems of user training and libraries requirements must be addressed. This activity which must continue independent of any new technical development activities. The level of CAD proficiency of selected project personnal must be determined . and documentation activities can begin. Clearly. the software maintenance activity makes up a significant portion of a CAD system support task. That is. the implementation. The RAYCAD feedback cycle is used not only to report bugs. Each formal feedback document is reviewed regularly by a joint project management/ user representative committee and the requests are prioritized in accordance with a predefined process.ASPECTS OF A LARGE. long-term support of software systems has been viewed as strictly a software maintenance function. When the document is completed and approved. trained personnel were made available to respond to user problems and to implement minor software changes where necessary. work can begin on generation of the Functional Specification which will present a detailed description of the software design. such as user requirements definition. 
Of equal importance is the long-term commitment to the user to provide a CAD capability which is responsive to state-of-the-art technologies. the scope of the full year CAD effort is formalized. for each project application. More specifically. then. testing. for example. but also to request limited scope changes to the software. is made up of the following major elements: User Training Libraries Maintenance User Feedback System Maintenance In the past. The system maintenance and user feedback functions can be characterized as those activities which ensure that the released automation software is responsive to user needs. The feedback cycle must provide to the user a means of formally reporting software bugs or any other problems or requests which could enhance system usefulness. The library update requirement will depend on the number of elements (logic models. . as well as homework assignments. and the support of the user throughout the design cycle is mandatory. R. The intent here is not to provide all the answers but to enumerate those activities that are critical to the success of a large scale CAD project. In conclusion. HEMBROUGH. SUMMARY This paper has touched on several aspects of a complex subject. a formal request is presented to the support group and a mutually agreeable delivery schedule is developed. then the essence of a successful CAD system has been established. more projects). the benefits of automation must be made clear to upper management. the character of the product development cycle must be clearly understood. Where possible. The RAYCAD libraries support function provides a comprehensive set of common library models to support the design verification and product design activities.. it is imperative that library requirements are presented early in order to allow sufficient time to develop the models. then additional manpower resources must be dedicated to the support task. etc. 
the material is tailored to reflect applications familiar to those being trained. Because of the increasing complexity of individual library devices. In this way. mechanical models. The training approach taken for the RAYCAD system is to provide a number of indepth courses which correspond to major system elements. the training process can be more easily extended to the actual design environment. A long-term commitment of manpower and resources is mandatory.e. For example: Gate Level Simulation Circuit Analysis Functional Simulation Automated Drafting These courses require up to 30 hours of classroom instruction. the dynamics of the technological marketplace must be monitored.130 F. As the scope of the CAD applications activity increases (i. the development of user oriented software must be a primary goal. PABICH and an appropriate training activity scheduled. The courses are presented periodically with special sessions scheduled on a demand basis. if the needs of the CAD users have been satisfied and if upper management is aware of the benefits of CAD. The models have been thoroughly verified and documented by RAYCAD library support personnel.) to be added to the established CAD libraries. When additional models are required for a new project. CAD supported hardware design cycle. its impact on schedule and cost is -assessed. who are well versed in CAD applications. meet with project personnel to discuss design requirements and to develop a plan for the efficient use of CAD resources. individuals. "DEVALUATION 1 . the design file serves as a means of passing data from one phase to another and is updated as more detailed design information becomes available. Design data file access is limited to only specified users in order to ensure its integrity. As the design cycles progresses. This phase is critical to the successful utilization of CAD in that CAD supported project goals and projected milestones are defined. 
such as user training and library updates are detailed. with CAD support taking the following form: • During the project planning phase. PROM burn-in data.^ DESIGN DOCUMENTATION FABRICATION AIDS & Figure I I . INTEGRATED CAD SYSTEM APPLICATIONS SUPPORT 1 3 1 This activity melds the hardware design cycle with the CAD system capabilities to form an integrated. the scope of the RAYCAD effort is defined. and other requirements. deliverable documentation formats. Figure II presents a graphical representation of key events during the hardware development cycle. At this point. At the end of the design cycle. the file contains electrical and physical design characteristics. During subsequent phases.ASPECTS OF A LARGE. • • • SUPPORT το PROJECT MANAGEMENT SUPPORT TO USERS PROJECT PLANNING TEST A N D FABRICATION J . and test data for translation to one or more automatic test systems. fabrication aids such as PCB drill tapes. the user training and libraries update tasks are initiated and RAYCAD personnel are designated to support CAD related activities. . . All of the design data are entered and checked for formal errors. Suitable organisational measures are a precondition for the successful application of the procedures. EAEC. COMPUTER-AIPEP PESIGN LARGE SCALE CAD USER EXPERIENCE F. After an interactively performed placement of the components on their boards. Simultaneously development risks are reduced and progress becomes more transparent.oi digitai electronic circuiti and iyitemi Horih-HoUand Publishing Compone ©ECSC. Data which are completely free of formal errors and partially free from semantic errors are provided for the program system PENTA. For example. The resources consist of production investment. Uusgraoe. the conductor paths for the printed circuit boards are generated automatically. Bruiitli S Luxembourg. Outline of the "PRIMUS System" (Fig. EEC. 1979 G. Special emphasis is placed on consistency. editor. 1) The heart of our system is a central data base. 
personnel training. Therefore extensive checks of the input data are always carried out at the beginning of computer-supported steps. The simulator TEGAS is used for the verification of the functional correctness and timing behaviour of the logical networks. An analysis of the electrical behaviour 133 . application rules and the definition of areas of responsibility and authority. The programs of the RUE system are grouped around this data base. for the support of the development and production of digital hardware products. The result of these considerations is a work flow in which computer aided or automated steps alternate with manual steps. Functional changes are carried out with the help of a change language. Introduction Siemens Data Processing Systems Division has developed and put into operation an integrated system. The system RUE is used in dialog mode to support the functional design. which we call "Arbeitssystem PRIMUS". Klaschka Data Processing Systems Division Siemens AG Munich. completeness and freedom from formal errors of the transitional phases between these steps. By the application of specialized methods. By an "Arbeitssystem" we mean all procedures. Germany An integrated system for computer aided development and production of digital electronic circuits and systems is presented from the point of view of a user. resources and organizational measures which are necessary to coordinate a product development project from the first idea to the finished product. data processing investment and material. The procedures include design and test methods as well as procedures for manufacturing and quality control. they extract data from and deposit data into it. machines and computer programs it is possible to considerably speed-up the product design cycle. where naturally the manual stages are particularly critical. For this purpose the current descriptions of the networks are extracted from the data base. 
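The formal checking step described above can be pictured with a small sketch. The netlist representation and the two rules checked here (every referenced component must be declared; every net needs a driving pin) are invented for illustration and are not the actual PRUBAL/PRUPFE rules, which the paper does not detail.

```python
# Illustrative formal checks on entered network data; a sketch only.
# The data model (component table, nets as pin lists) is hypothetical.

def formal_errors(components, nets):
    """Return a fault list: undeclared components and nets without a driver."""
    errors = []
    for name, pins in nets.items():
        for comp, pin, direction in pins:
            if comp not in components:
                errors.append(f"net {name}: undeclared component {comp}")
        if not any(direction == "out" for _, _, direction in pins):
            errors.append(f"net {name}: no driving pin")
    return errors

components = {"G1": "NAND2", "G2": "INV"}
nets = {
    "N1": [("G1", "Y", "out"), ("G2", "A", "in")],
    "N2": [("G2", "Y", "in"), ("G3", "A", "in")],  # G3 undeclared, no driver
}
for error in formal_errors(components, nets):
    print(error)
```

Rejecting such faults before each computer-supported step is what keeps the manual-to-automated transitions consistent.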
An analysis of the electrical behaviour of the complete networks concludes this phase. The D-LASAR system is used for the generation of test data. The post-processor PERFEKT generates manufacturing documentation and data media for the control of NC-machines. Change data for printed circuit boards and back panels which have already been produced are generated by the post-processor AEDIFF.

Figure 1. The "Arbeitssystem PRIMUS": management guides the path from idea through realization to products and receives status reports.

Phases of the Development Process (Fig. 2)

The whole process is broken down into four phases:

functional design
physical design
physical realization and test
test data determination

Figure 2. PRIMUS work flow: logic design is entered in dialog mode (initial input and changes) and passes the PRUBAL-PRUPFE formal checks into the central data base (master files); TEGAS performs logic simulation; PENTA performs placement, routing and analysis; D-LASAR generates test patterns for the plug-in tester PALOG; AEDIFF produces change instructions for change equipment (back panels, boards); PERFEKT produces documentation and input data for NC-machines (plotting, testing, drilling, wiring).

Phase 1: Functional Design

The system level requirements for a data processing system are determined by general properties like performance, application fields and compatibility. The characteristics of the system, which we call system architecture, are described on this level in categories like processing capacity, throughput, transfer rates and interrupt behaviour, as well as type, number and capacity of background storages and peripheral devices.

The planning of the register level follows the determination of the system architecture. It covers, above all, the data registers, data paths and the connections existing between them. These data can also include assignments and physical realization constraints, e.g. standards for documentation which is to be produced automatically, component locations or maximum signal delays. Computer aid in documenting, checking and updating these detailed planning data becomes imperative.

The transition of the design from the register level to the gate or component level leads to a considerable expansion of the amount of data. When a development engineer has reached the gate or component level in the design process, his logical network will be entered in a data base of the RUE system. All network data are described in the input language LOGOL and entered in dialog mode on a display terminal. In the case of larger complexes the logic diagrams are entered with the help of a digitizer. This involves comprehensive plausibility checks and, if necessary, corrections. In connection with the data of the central library, the RUE system supports the verification of the design with a great number of plausibility and completeness checks. The library contains descriptions of:

integrated and discrete components with their functional, geometrical and electrical characteristics
the packaging technology
electrical signal transmission characteristics

Logic changes and corrections of planning errors are described in the AESPRA language and carried out in dialog mode with the RUE system. The logic networks stored in the data base can be plotted in the form of logic diagrams at any time, and a number of lists can be generated which provide the complete current status of all networks.

The logic networks are verified for functional correctness with the help of the simulation system TEGAS. The technique of design verification by simulation on the gate level has reached a high standard. Thanks to the performance of the computers and simulation programs now available, it is possible to consider not only typical gate delay times, but maximum and minimum ones as well. Timing errors like spikes and hazards of the networks are thus recognized.
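The min/max delay idea can be made concrete with a toy calculation: propagate earliest/latest settling times through a gate, and flag a potential spike when two inputs may switch in overlapping windows. The delay numbers are invented, and TEGAS's actual algorithms are, of course, far more elaborate.

```python
# Toy min/max delay bookkeeping: each signal carries an (earliest, latest)
# settling window; the windows and gate delays are invented for illustration.

def arrival(window_a, window_b, d_min, d_max):
    """Output settling window of a 2-input gate with delay [d_min, d_max]."""
    early = min(window_a[0], window_b[0]) + d_min
    late = max(window_a[1], window_b[1]) + d_max
    return (early, late)

def may_glitch(window_a, window_b):
    """A spike is possible if the input switching windows overlap."""
    return window_a[0] <= window_b[1] and window_b[0] <= window_a[1]

a = (2.0, 3.0)   # input A settles between t=2 and t=3
b = (2.5, 4.0)   # input B settles between t=2.5 and t=4
print(arrival(a, b, d_min=1.0, d_max=1.5))  # prints (3.0, 5.5)
print(may_glitch(a, b))                     # prints True
```

Simulating with typical delays only would never expose the hazard that the overlapping windows reveal here.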
Phase 2: Physical Design

When the logic networks in the data base have attained the level of maturity necessary to start layout of PC boards and back panels, PENTA processing is started. The placement of components on the PC boards is carried out interactively. The developer's assignments with regard to location and distances between components are taken into consideration. The result is presented in a computer-generated assembly plan, which shows the locations of all components on the PC board.

The conductor path layout for PC boards and back panels is done automatically, with allowance being made for electrical and geometrical boundary conditions in so far as these are determined by packaging and circuit technology (conductor width, number of signal layers, etc.). If occasionally the available wiring capacity is insufficient for the realization of all signal connections as printed conductors, then the remaining connections are generated with discrete wires. Subsequently the PENTA program system analyzes the electrical behaviour of the physical networks, connections and paths, and reports in the form of fault lists any violations of the logic circuit design rules. All faults can then be eliminated by means of manual interventions using modification instructions.

Phase 3: Physical Realization and Test

The physical design is concluded with the generation of a PENTA master file. This master file is transferred via a conversion routine into the central data base of the RUE system, so that the data base now contains, in addition to the functional data, all physical data. Using this comprehensive data base, the manufacturing documents and data media for the control of NC machines are produced. Magnetic tapes control electronic and mechanical plotters which draw logic diagrams and assembly plans for PC boards, as well as check plots of board layout and photomasters for printed boards. Paper tapes and magnetic tapes are used as control media for multispindle drilling machines, automatic insertion equipment, wiring machines and automatic testers. The PC boards and back panels for the prototypes are produced step by step on a series of automatic and semi-automatic NC machines.

After the prototype is built, it is submitted to an extensive functional test in the development and test center. The design errors determined with the support of appropriate test and system software are not manually corrected. Each functional change as it occurs during prototype testing is formulated by the designer in AESPRA and entered into the data base. Using the RUE system, all necessary corrective measures on PC boards and back panels are carried out with computer support. With the help of appropriate programs, the change data are generated by comparison of the current data base contents with the previous valid state. Subsequently the postprocessor AEDIFF produces automatic change instructions and data media for the control of change equipment. The change data are used in the development and test center to update PC boards in full compliance with established production engineering standards. By proceeding in this way, the prototype is updated not only functionally, but also physically, so that at the end of its test it represents in fact the first production model.

Phase 4: Test Data Determination

The RUE system provides the network description of the test specimen directly from the data base. If a functional simulation has already been performed for the unit concerned, the bit patterns from the TEGAS result file can be converted and prepared for use by automatic testers. If the test quality thus obtained is insufficient, further bit patterns can be generated using the DLASAR system. If neither simulation bit patterns nor hand written test programs are available for a particular device, the test patterns will be completely generated using DLASAR. Testing and debugging is carried out on the automatic tester PALOG.

Experience with the PRIMUS System

The PRIMUS system with the capabilities described has been in use in the Data Processing Systems Division since 1976. The system was first applied for the development and production of the central unit models 7.722 and 7.760. Next, the development of the new compact computer models 7.708 and 7.718 was supported. PRIMUS also is used for the development of laser printers. In the Telecommunications Group further applications are in preparation.

Our experience with the "Arbeitssystem PRIMUS" has been highly positive. A prerequisite for this success is the intensive training of all personnel and the strict observance of procedure rules. The expenditure for the development and use of this system is more than justified by a considerable rationalisation of the entire development and production process, in particular by the substantial reduction of development time. Progress can be seen from the fact that development times have been shortened in spite of continually increasing demands for performance and capacity. When we compare current experience with the time required for development of the first central units of the 7.000 series, we find the following relationship (Fig. 3): the time required for functional design cannot be significantly reduced. Due to the methodical and consistent procedures used, however, the design data at the end of this phase have reached a high quality standard. This has a decisive influence on the following stages, physical design and prototype manufacture; the time required for these is reduced by about 50%. The time from start of prototype test to delivery of the first production machines is shortened by about 30%. Altogether, we reckon with a reduction of the product design cycle for the development of complex digital hardware systems of up to 30%.

Fig. 3. Comparison of development times for functional design, physical design and prototype manufacture, prototype test, and first production model.
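The change-data generation described in Phase 3 (comparing the current data base contents with the previous valid state) amounts, at its core, to a set difference over the stored connections. The wire representation below is invented for illustration; AEDIFF's real data formats are not given in the paper.

```python
# Sketch of deriving change instructions by comparing data base states.
# A "wire" here is a hypothetical (net, from_pin, to_pin) triple.

def change_instructions(previous, current):
    """Wires to remove and wires to add to bring a built board up to date."""
    previous, current = set(previous), set(current)
    return sorted(previous - current), sorted(current - previous)

old_state = {("N1", "G1.Y", "G2.A"), ("N2", "G2.Y", "G4.A")}
new_state = {("N1", "G1.Y", "G2.A"), ("N2", "G2.Y", "G5.A")}  # one net moved
remove, add = change_instructions(old_state, new_state)
print(remove)  # prints [('N2', 'G2.Y', 'G4.A')]
print(add)     # prints [('N2', 'G2.Y', 'G5.A')]
```

Because the instructions are derived mechanically from the data base rather than written by hand, the physical board can be kept in step with every accepted functional change.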
Development risks are reduced and progress is made transparent to management. In the Telecommunications Group the program systems RUE and PENTA are used for the development of switching processors.

COMPUTER AIDED DESIGN OF DIGITAL COMPUTER SYSTEMS

Dr. Luther C. Abel
Digital Equipment Corp., Maynard, Massachusetts, USA

ABSTRACT

This presentation is divided into two parts. The first is an overview of state-of-the-art CAD tools used in designing and manufacturing contemporary computer systems, as typified by those at Digital Equipment Corporation. We trace the evolution of a hypothetical CPU design from concept to production. RTL simulators derived from an ISPS description of the machine are first used to explore the correctness and performance of the proposed design; later they are used for microcode development. Logic designs are captured using an interactive schematic drafting system, SUDS. Data from these are used as input to a high-precision logic simulator, SAGE. PC design is performed using the IDEA system, which combines interactive design editing with a full complement of automatic algorithms, running on a tightly coupled mainframe host/graphics satellite configuration. Interface data from several boards are combined to initiate a backplane design (printed or wirewrap) which is also done on this system. If custom LSI chips are required, the same logic design path is used, and data from it are input to one of several interactive chip layout systems depending on the technology to be used. PC and backplane data are passed via a common Product Description File to a central CAM group. Here, manufacturing process dependent features are added to the design and it is post-processed into soft tools, such as artwork and N/C machine tapes. Design quality is ensured by several feedback paths, such as a comparison between final artwork and original logic design data.

The second part is a discussion of the managerial and business-oriented impact of CAD: benefits experienced, barriers which must be overcome, and project risk. The traditional benefit claimed for CAD, reduced design cost, is often the least significant. Others which must be examined to assess the overall corporate impact of CAD include faster design turnaround, greater design accuracy, and the need for CAD tools simply to deal with design precision or complexity. Besides initial cost, difficulty in measuring benefit, reluctant acceptance by users, and lack of flexibility to meet new technologies are all barriers.

INTRODUCTION

The needs for computer aids to the engineering, design, and manufacturing processes are nowhere as imperative as in the computer industry itself. Digital systems are today of such complexity that it is both intellectually and economically infeasible to design them without computer augmentation of human skills. Our goal is to present an overview of a typical set of tools, and then to discuss the managerial and business aspects of CAD.

In the design cycle described below, the machine's ISPS description is automatically processed into RTL simulators which are used to explore the correctness and performance of the proposed design. When the computer's architects are satisfied, this ISPS is frozen. Microware (firmware) engineers write microcode for the machine and verify via this functional simulator that their microcode, acting on the machine described, will produce the desired external (user perceived) characteristics.
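The cross-checking style used throughout this flow (drive two independent models of the same machine with identical input sequences and compare outputs) can be shown in miniature. Both "models" below are trivial stand-ins for the real functional and gate-level simulators.

```python
# Two independent models of the same function: a specification-level model
# and a "gate-level" realization. Disagreement on any stimulus is a bug.

def functional_model(a, b):
    return a ^ b  # the specification: exclusive OR

def gate_model(a, b):
    return (a & ~b & 1) | (~a & b & 1)  # XOR built from AND/NOT/OR

def mismatches(model1, model2, stimuli):
    """Return the cycle numbers on which the two models disagree."""
    return [t for t, (a, b) in enumerate(stimuli)
            if model1(a, b) != model2(a, b)]

stimuli = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(mismatches(functional_model, gate_model, stimuli))  # prints []

def buggy_model(a, b):
    return a & b  # a wiring mistake: AND instead of XOR

print(mismatches(functional_model, buggy_model, stimuli))  # prints [1, 2, 3]
```

The value of the technique is that neither model is trusted on its own; an error must be made identically in two independent descriptions to escape detection.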
DESIGN TOOLS

The evolution of Computer-Aided Design tools at Digital has followed the traditional "bottom-up" cycle: CAD tools were first introduced at the final stages of the engineering design process and at the engineering/manufacturing interface. Driven by both new advances in CAD technology and demands of new design technologies, CAD tools have been made available earlier and earlier in the design cycle. Today they are available to the engineer to assist him with everything from his earliest conceptual explorations to final release of a completed design to volume manufacturing.

Architectural and Logic Design

We follow a hypothetical new computer through its design cycle. The machine's architecture is expressed in ISPS (3), giving a precise functional description of the processor via its simulator. Machine development now splits into two independent paths which will not rejoin for many months.

Meanwhile, logic engineers produce a gate-level logic design for the machine. Their description is captured via an interactive schematics drafting system, SUDS (8). SUDS offers not only improved drafting productivity, but simplifies later changes to logic (if necessary) and captures electrical interconnection information in a computer data base which is input to later steps in the design process, ensuring accuracy without the possibility of errors introduced by hand transcription. Changes and fixes are easily incorporated, and a history of the evolution of a design is kept so that alternative approaches may be recorded and evaluated.

Logic design data from SUDS is used to drive a high-accuracy logic simulator, SAGE (11). The engineer explores the performance of his design and is made aware not only of outright mistakes in his logic design, but of marginal conditions and possible timing errors that would be difficult to detect using a hardware breadboard. Final accuracy of the logic design is ensured by driving both this gate-level simulator and the earlier functional simulator with identical input sequences and comparing their outputs.

Physical Design

Interconnection and component description lists from SUDS are then input to the physical design process. Printed circuit layout (if PC is the selected interconnection technology) is accomplished on the IDEA system (1, 5). This system interconnects high-performance interactive graphics terminals for the editing of PC designs with a large mainframe host having sufficient computational power and memory space to handle layout data management and complex layout algorithms.

Layout follows the traditional process of first locating components on the board and then routing the interconnections. Intermediate adjustments of gate assignments to packages and connector finger assignments are first made. Automatic routines define component placement using both constructive and iterative improvement techniques (7). Powerful algorithms of both the basic line routing variety (9) and a unique topologically based router (6) can be used to lay out the interconnections themselves. Competitive pressures in our marketplace demand boards of such complexity and density that the average layout is typically beyond the capability of even the most powerful of contemporary algorithms to process with complete success.
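The "iterative improvement" placement technique mentioned above can be illustrated in toy form: accept pairwise component swaps whenever they reduce an estimated total wire length. Real placers work on two-dimensional boards with far richer cost functions; the one-dimensional slots here are purely illustrative.

```python
# Toy iterative-improvement placement: swap pairs of components whenever
# the swap reduces total estimated wire length (1-D slot positions).
from itertools import combinations

def wirelength(place, nets):
    """Sum of slot distances over all two-pin nets."""
    return sum(abs(place[a] - place[b]) for a, b in nets)

def improve(place, nets):
    place = dict(place)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(list(place), 2):
            cost = wirelength(place, nets)
            place[a], place[b] = place[b], place[a]      # try the swap
            if wirelength(place, nets) < cost:
                improved = True                          # keep it
            else:
                place[a], place[b] = place[b], place[a]  # undo it
    return place

nets = [("u", "w"), ("w", "v")]
start = {"u": 0, "v": 1, "w": 2}        # initial wire length: 3
final = improve(start, nets)
print(wirelength(final, nets))  # prints 2
```

Such greedy improvement converges only to a local optimum, which is one reason the text pairs it with constructive techniques and keeps a human designer in the loop.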
Consequently, a human designer must be kept "in the loop" to complete and perfect most designs. Powerful interactive design editing stations are a necessary part of the design system. Here, technician operators can modify proposed component placements, gate assignments, connector pin assignments and wire routings, and can interactively complete the "leftovers" from the automatic routines. The results of each design session (many of which are required to complete a PC design) are stored in a data base under the aegis of a powerful data base management system. This DBMS, via the structure of the data base and the relationships it defines, insures the integrity of design data and agreement between physical and logical descriptions (e.g. interconnections on a schematic versus etch paths on a board). Changes made during layout which affect the schematic (e.g. gate swaps; note that none change the logic) are noted in a file which is used to automatically update the SUDS schematic before the final schematic to artwork comparison is made.

Design Postprocessing

When the PC layout is completed, post-design audit and quality control routines check the design for manufacturability (e.g. spacing tolerances between lines, and between lines and pads) and for agreement between logic (schematic) and physical designs. An interconnection list is derived from the data base which will produce board artwork. It is automatically compared to the original interconnect list from SUDS, and any discrepancies are corrected by the designer before release. A sharp distinction must be maintained between the engineering and manufacturing definitions of a design: the former describes the final product, while the latter may include substantial process-dependent information which may vary from production site to production site. A read-only data base for the design is passed to our Computer-Aided Manufacturing tools generation group. Here manufacturing-dependent augmentations (e.g. plating bars for PC manufacture) are added.

Other Design Systems

Interface data from schematics for the several boards comprising the system are automatically combined into a file defining required backplane interconnections. This backplane interconnection list is then input into either our wirewrap system or into IDEA for PC layout. If a mix of PC and discrete wires is required, connections not completed on the PC are fed into the wirewrap system. Signal consistency between boards is thoroughly checked.

Integrated circuit design is done using a similar process. Even heavier emphasis is placed on logic simulation during design and on post-layout audits of design correctness because of the impossibility of post-design circuit modifications and the high cost of manufacturing tools (e.g. mask sets): the design must be absolutely correct on the first pass! In a similar vein, the high cost of building prototype chips and the impossibility of debugging them has led to heavy emphasis on design analysis and verification tools: software which will, as far as possible, ensure the correctness of a design before it is ever manufactured. Actual IC layout is done using a variety of technologies and tools. A commercial digitizer/editor system is used for custom and standard cell designs. Once again, post-design auditing routines check for manufacturability and for agreement between logic (schematic) and physical designs. Custom IC designs done at the circuit element level (individual transistors, resistors, etc.) typically involve at least an order of magnitude more elements than even the most complex PC design, severely straining the capacity (and running expenses) of CAD tools. Experiments have also been performed to determine the adaptability of PC layout techniques to gate arrays and other regular logic forms. As we stand on the threshold of VLSI designs containing 10^5 to 10^6 elements, a new generation of tools is required as the capability of the human intellect augmented by today's tools is once again exceeded.

Interactive systems speed design and verify the layout of sheet metal and mechanical subassemblies.
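The schematic-versus-layout agreement check described above reduces, in its simplest form, to comparing two connection lists with pin order ignored. The pin naming below is invented for illustration and does not reflect the actual SUDS or IDEA data formats.

```python
# Sketch of a post-layout audit: the connectivity extracted from the board
# must match the schematic net list; nets compare as unordered pin sets.

def netlist_diff(schematic, layout):
    """Return (missing, extra) connections as sets of frozen pin sets."""
    def canon(nets):
        return {frozenset(pins) for pins in nets}
    s, l = canon(schematic), canon(layout)
    return s - l, l - s

schematic = [("U1.3", "U2.7"), ("U2.1", "U3.4")]
layout = [("U2.7", "U1.3"), ("U2.1", "U3.5")]  # second net mis-routed
missing, extra = netlist_diff(schematic, layout)
print(len(missing), len(extra))  # prints 1 1
```

Canonicalizing each net as an unordered pin set is what lets the audit accept the first net, which the two lists record with opposite pin order, while still catching the mis-routed second net.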
we have repeatedly found that automating a design step has produced heavy. Similarly.especially because it is a highly visible. ABEL The use of CAD at Digital is not limited to digital logic design. MANAGEMENT ISSUES An extensive set of CAD tools such as those described is not emplaced without considerable management forethought and problem-solving. a new generation of tools is required as the capability of the human intellect augmented by today's tools is once again exceeded. and interrelate design data. etc. leading to the digitizer/editor systems so prevalent today. often centrally funded investment. Barriers Barriers to the use of CAD tools.software which will.000 in capital investment. often unexpected demand for its resulting data base. Increasingly. Mechanical and hydrodynamic analysis programs are imperative in the design of disk storage devices.106 elements. maximal productivity from each can be another key to overall corporate success. As we stand on the threshold of VLSI designs containing 10^ . experience at Digital and many other companies (e. has been that each design station requires on the order of $100. However. In a similar vein. We now shift our focus to discuss some of the management issues surrounding the development and operation of a CAD system. Circuit simulators aid in the design of everything from custom IC chips to power supply regulators. required .C. The traditional answer has been that it can substantially reduce direct design costs. reduce design time. as far as possible. Most obvious is cost . In the highly competitive world of computer design and manufacture where baseline technology is changing so rapidly. up-front. Often the new data base is also the key to automation in another engineering or manufacturing process. Our experience. Nowhere is this better illustrated than in the IC design field. while the returns on that investment are often dispersed over time and over numerous projects with independent budgets. 
CAD tools can, however, ensure the correctness of a design before it is ever manufactured, reducing indirect design costs. Increased accuracy and pre-testing of a design via CAD techniques can significantly reduce product recall or field revision expenses. In an era of severe shortages of educated, qualified technical personnel, reduced design time also means increased productivity from each design engineer and technician. As designs proceeded beyond a few hundred gates, "bookkeeping" type CAD tools had to be developed to organize, store, and interrelate design data.

Even given the above advantages, barriers are numerous and must be addressed by management. The risk of using a new tool is often a barrier: bugs may exist in the software, or at least unanticipated deficiencies in its functionality, risks acceptable only if adequate fallbacks are available or there is simply no other way to do the task. Less obvious are psychological barriers. Another expense in fielding a CAD system is the matter of operator training. Contemporary CAD tools are so complex that even the most carefully human-engineered and self-prompting system may require weeks of operator training and familiarization, much of it spent on the environment surrounding the CAD program and its data entry routines. This is due to the very nature of the design trade-off decisions made when implementing a tool and its data base.

An oft-overlooked problem in planning for the CAD system investment is the effect of varying workloads; a sound strategy for coping with peak demands is imperative. Finally, the flexibility of CAD tools to quickly adapt to new technologies or design rules is limited; without deliberate planning, CAD will always be in a "catch-up" mode. A thorough understanding of the design process by the system designers is essential, and our experience here is once again bolstered by that from other similar corporations. Although this may result in the introduction of CAD into new areas of design, the process is most often evolutionary.
This necessitates a conscious management change towards better planning and anticipation: early warning about technology changes to CAD developers, and inclusion of CAD lead times in overall project schedules. In the capital-intensive CAD environment, the number of designs requested by Engineering can and does fluctuate widely.

Designers frequently feel uncomfortable with the newness of a tool, perceive a threat that it will degrade or supplant their own creativity, and fear enslavement to the system and loss of control over their own data and work. Rarely is a CAD tool well understood when it is first used; inexperienced operators and under-tested processes abound, and it is the users who will share in the unexpected problems invariably accompanying the introduction of a new tool. No engineering manager wishes to jeopardize his project by relying on a new and unproven tool. Tool trade-offs (e.g., the number of layers, or the fineness of grids in a PC system) further constrain adaptability. Surprisingly, peripheral tasks (getting on and off the system, system crash recovery procedures, utility programs, etc.) can account for a large portion of operator training. In one recent example, we discovered that the average time to do a design using one particular new tool at Digital fell from twelve weeks to seven weeks during its first nine months of use (2).

CAD Introduction

How are CAD tools introduced into a corporation? Most companies plan and budget for expanding and upgrading their CAD tools in accordance with some set of goals and priorities. Most often it is an evolutionary process centering on existing tools. A radically new tool is far more frequently introduced into the corporation's engineering environment by a "forcing event".
Users (operators, engineers, management) must be educated about the need for and the reasonableness of a tool to the point where they enthusiastically await its arrival. They must be convinced that the tool will enable them to better solve their real day-to-day design problems, and not some abstraction and simplification distilled by the system developers. Good human engineering of the system and involvement of the users in its design from the earliest stages can help generate a sense of partnership.

In a traditional manual design environment, a few hundred or thousand dollars of equipment (drafting boards, desks, etc.) standing idle except for peak loads is of little consequence. In the capital-intensive CAD environment, though, this may require justifying and requesting even more capital than for a constantly-loaded system always at full productivity.

The tool, or at least its basic technology, may exist for years, yet it will not be adopted by the company. Perhaps its cost/benefit is not well established; perhaps it cannot find the necessary champion to sponsor it at high management levels; perhaps it simply loses in the competition for the corporation's limited internal development resources. A decision by Digital to do some of its own IC designs was such a forcing event: simulation was imperative to IC design, and logic simulation tools were developed.

CAD as Part of the Design Process

A CAD tool always exists in a context: the surrounding design process, user training, and process changes. Tools may be difficult to operate and non-standardized in their human interfaces. This can lead to a significant portion of system complexity and operating time occurring in what should be the most trivial portion of the overall system, unless the problem is actively monitored and addressed. Thus there is a need to examine the surrounding design process whenever a CAD tool is introduced, and again perhaps a year later when the tool is fully entrenched. Finally, observing data flows can be a key to understanding and improving the CAD-augmented design process.
ECL logic, hybrid circuits, custom ICs, and multi-layer PC boards are but a few of the advanced technologies adopted at Digital in the past few years which required planned, concomitant CAD tool development. Increasingly, new CAD tools are an integral part of the plan for new product technologies. Then a decision is made to adopt a product technology which absolutely requires a given tool for success. While the CAD technology has existed since the early 1960's (4), logic simulation had not found sponsorship for the development investment and engineering design process changes required to bring it into the corporation. Once the initial system existed, latent demand for this type of tool elsewhere in the company surfaced, and it spread until it became a part of the general CAD system; logic simulation has now spread to permeate the entire engineering environment.

Tools can and do affect process; conversely, process must often change to maximize the effectiveness of a tool. Why, for example, should CAD output be pen plotted, only to be micro-filmed for distribution, when direct COM is available? How many times is CAD-generated data hand transcribed for input to another computerized process when a simple interface program could be written? As CAD speeds individual design steps, time spent in simple administrative tasks (passing jobs from one responsible group to another, logging and filing, etc.) and time lost waiting in queues can come to dominate total design time; careful management attention to the overall process is required to ensure that this does not occur. Such examination should take place during system design (so required process changes can be identified and planned) and again when the tool is released. We have already mentioned the fact that automating a design step may produce unanticipated demand for its design data base; these usages should be traced to ensure that the right data is being provided. Tools may also be crippled by a lack of uniform data definition across systems.
An excellent example of this phenomenon is the introduction of logic simulation into Digital several years ago. Again, other engineering departments willingly paid the smaller investments of expanding the basic system to meet their needs. Data interfaces between CAD tools become important as more of them are emplaced, yet they are often given the least forethought by CAD system designers. One must also understand all impacts if a design data base is changed.

CONCLUSION

A CAD system must evolve and grow. Technology both permits it and demands it. Managing the transition to an automated system includes everything from initial system design to installation to operational tuning. A description of a typical, extensive set of automated tools has been presented.

REFERENCES

1. Mikami, K. and Tabuchi, K., "A computer program for optimal routings of printed circuit connections", IFIPS Proc., 1968, pp. 1475-1478.
2. Abel, L.C., report at (internal) Digital CAD Symposium, October 1978 (no proceedings published).
3. Helliwell, R., "The Stanford University Drawing System", Stanford Artificial Intelligence Laboratory.
4. Breuer, M.A., Ed., Design Automation of Digital Systems, Prentice-Hall, 1972.
5. Hanan, M. and Kurtzberg, J., "A review of the placement and quadratic assignment problems", IBM Research Report RC 3046, April 1970; also in Breuer, op. cit.
6. Doreau, M.T. and Abel, L.C., "A topologically based non-minimum distance routing algorithm", Proc. 15th Design Automation Conf., 1978.
7. Sherwood, W., "Simulation hierarchy for microprocessor design", IEEE/Michigan State Univ. Workshop on Design Automation and Microprocessors, 1977 (no proceedings published).
8. Barbacci, M., "The symbolic manipulation of computer descriptions: ISPS computer description language", Dept. of Computer Science, Carnegie-Mellon University, Pittsburgh, PA, March 1978.
9. Schuyler, Bruce, et al., "Device independent interactive graphics in a time shared environment", ACM Symp., Sept. 1972, pp. 92-99.
10. Abel, L.C., "Structure and foundations of a large multi-user, multi-task CAD system", Interactive Systems, Proc. EUROCOMP, 1975, pp. 247-262.
11. Armstrong, "Quantitive system cost analysis", Interactive Systems, Proc. EUROCOMP, vol. 2, 1975, pp. 109-125.
Other groups may need data, but at times or in formats incompatible with current process and tools. Examining data flows can help identify these users and permit changes to satisfy their needs. We perceive two essential keys to success in that growth: automating the right steps in the design process, and managing the transition to an automated system. These tools have been presented along with insight into the management problems surrounding their introduction.

TECHNICAL SESSION IV
Chairman: Hugo DE MAN, Leuven University, Belgium

Computer-Aided Design of digital electronic circuits and systems, G. Musgrave, editor
North-Holland Publishing Company
(c) ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

A description is provided of the actions that can be taken in order to verify digital circuit design, analog and digital, as it progresses from specifications to physical implementation, as well as of those procedures conducive to multiple and complementary levels of verification during the different design steps. Among the new verification criteria are circuit testability, early design rule checking, automatic mask defect detection, and test program preparation.

IC design consists schematically of the following steps (12):
a) drawing requirements (functional specifications and physical performances)
b) checking specifications for consistency and completeness
c) selecting a functional architecture
d) selecting components for physical realization
e) checking that integration constraints and initial specifications are met
f) drawing masks
g) analyzing masks and checking design rules
h) manufacturing
i) controlling quality
VERIFICATION OF LSI DIGITAL CIRCUIT DESIGN

J.-C. Rault (1), J.-P. Avenier (2), J. Michard (3), J. Mutel (2)

1. THOMSON-CSF and IRIA, Domaine de Voluceau, 78150 Le Chesnay, France
2. CII-Honeywell-Bull, Les Clayes-sous-Bois, France
3. LETI-MEA, Grenoble, France

This paper addresses the different techniques and associated tools which prevail in the development of digital circuits and are conducive to design verification. It is attempted to assess the state of the art regarding the automated tools available for the different levels of verification. Interrelationships among these tools are also discussed. Most prevailing industrial tools reflect the characteristics and constraints of MSI, but not those of forthcoming VLSI. One can note that new criteria, often missing from present tools, predominate in the design of future tools; among these criteria are the procedures favoring top-down design, integration of the various tools, and testability analysis.

1. INTRODUCTION

The ever-increasing complexity and density of the LSI and VLSI circuits we have witnessed during the past few years will soon cause obsolescence of the CAD-CAM tools and procedures that are in use today. To cope with this evolution, those responsible for computer tools aiding the design and manufacturing of LSI circuits are presently reconsidering the architecture and capabilities of CAD-CAM tools.

The structure of the paper parallels the usual process that is followed in the development of integrated circuits:
a) specification writing
b) automated specification verification
c) design verification after commitment to hardware implementation (hardware description languages, design simulations, comparison with specifications, testability analysis)
d) artwork generation (design rule checking)
e) test program preparation

2. IC DESIGN STEPS

Whatever technology is used, IC design follows the same general sequence of steps, from the drawing of requirements through mask making to quality control. The paper will be built upon the above sequence of steps.
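The design rule checking mentioned under artwork generation reduces, at its core, to geometric measurements on mask shapes. A minimal sketch of a spacing-rule check, assuming axis-aligned rectangles given as (x1, y1, x2, y2); the representation and values are illustrative only:

```python
def spacing(a, b):
    """Edge-to-edge distance between two axis-aligned rectangles
    (x1, y1, x2, y2); 0.0 when they touch or overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def spacing_violations(shapes, min_space):
    """Report index pairs of shapes on one mask layer that sit closer
    than the minimum spacing rule."""
    return [(i, j)
            for i in range(len(shapes))
            for j in range(i + 1, len(shapes))
            if spacing(shapes[i], shapes[j]) < min_space]

layer = [(0, 0, 2, 2), (2.5, 0, 4, 2), (10, 10, 12, 12)]
violations = spacing_violations(layer, min_space=1.0)  # the first pair is only 0.5 apart
```

Production checkers of the era already used far more efficient scan-line organizations; the all-pairs loop above only illustrates what a single rule means.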
First, however, we will briefly describe design verification.

3. DESIGN VERIFICATION

3.1 Levels of verification

Verification of IC design is not performed in a single step; rather, it accompanies each step of the design process. Schematically, three levels of verification may be distinguished:

A. A first level, in which it is attempted to detect possible design errors such as inconsistencies, redundancies or unnecessary points (steps a and b)
B. A second level corresponding to the design itself (steps c to h)
C. A third level after the final circuit is obtained (step i)

3.2 Approaches and types of analysis

Design verification takes several approaches, which may apply to more than one of the above levels:

a. a priori simulation: analyzing specifications before commitment to hardware (steps a and b), deriving a set of conditions and relations and checking for their consistency
b. deriving, by analyzing the functional specifications, a set of assertions and checking that they correspond to the function actually implemented by the circuit (levels B and C)
c. analyses performed after commitment to hardware: deriving from the description of the implemented circuit its electrical and logic (even thermal) behaviors and, as the case may be, checking that they meet the corresponding specifications

The above approaches lead to three types of analyses: analyses during the first design step, on a functional schematic (step c), and analyses performed on the subsequent descriptions, down to a mask drawing (steps d to i). Inconsistencies, ambiguities or omissions in the specifications will be reflected in the final product. The present paper will review each of these procedures while indicating their interrelationships and the characteristics that the corresponding CAD-CAM tools must exhibit.
To each of the above steps correspond verification procedures which are based on CAD-CAM tools. These analyses concern, separately or simultaneously, electrical schematics, logic schematics, physico-chemical data, and thermal data. In the comparative approach, the results of two simulations of the circuit are compared, each for a same predetermined sequence of input signals: the references are the formal specification of the circuit's function on the one hand, and the function, derived by simulation, of the circuit as it is implemented on the other. The later errors are uncovered in the design process, the higher the cost for their elimination.

4. WRITING AND VERIFYING SPECIFICATIONS

4.1 The different types of specifications

As for any other technical product, the quality and reliability of an integrated circuit depend as much on the reliability and quality of its specifications as on those of its manufacturing process. In other words, once specifications are drawn, this step is decisive in the design of an integrated circuit.
As a rule, one may distinguish:
- specifications intended for the potential user, i.e., the set of information strictly necessary for the user to use the final product; in short, such specifications are the mere expression of the user's needs;
- specifications intended for the circuit designer responsible for circuit manufacturing, i.e., the definition of structures which will confer to the final product those characteristics matching the user's needs as specified initially.

It is of prime importance to pay as much attention as possible to the way an integrated circuit is specified, as well as to the procedures used for verifying both the specifications (before actual implementation is performed) and their respect by the final product. First, we will focus on the former point (steps a and b above); the second point involves tools used in the subsequent steps (c to i), where verification is better adapted to automatic tools. Design is the area for which automatic validation is beyond all question the most difficult.

The specification writing step corresponds to abstract entities whose purpose is defining unambiguously the functions and the physical and electrical characteristics of the product. These specifications concern two points:
- the function to be implemented (logic function or algorithm, nature and type of input and output signals, behavior under given environmental conditions, domains of use, etc.);
- the actual implementation (partitioning into main blocks, input and output signals, geometry, package, thermal dissipation, power).

At each level of abstraction only strictly required information should be present. Here lies the main difficulty of specification writing: provide sufficient information while not attacking the actual physical implementation of the product. This set of entities forms a model later used as a reference during the other steps in design.

4.2 Ideal characteristics for specification writing tools

An ideal tool for writing specifications should exhibit the following characteristics:
- provide several levels of abstraction and automatic communication between levels;
- be independent from technology;
- avoid omissions and ambiguities;
- lead to specifications formal enough so that their consistency and completeness may be verified mechanically;
- allow for a format comprehensible to the user for whom the specifications are intended (potential user or designer of the circuit).

4.3 Tools for writing and verifying specifications

The foregoing indicates that the specifications of a circuit may have several forms, which can be put into two main categories: formal and informal specifications. Let us describe the different kinds of specifications more precisely.
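As a small sketch of what "verified mechanically" can mean, consider a state table, one of the formal forms discussed below. Completeness (every state handles every input) and consistency (no contradictory or dangling transitions) are checkable by a trivial program; the table format here is an assumption of the example:

```python
def check_state_table(states, inputs, transitions):
    """Check a state table for completeness (every state handles every
    input) and consistency (no state/input pair maps to two different
    next states; no transition targets an undefined state).

    transitions: list of (state, input, next_state) triples.
    """
    problems, seen = [], {}
    for s, i, nxt in transitions:
        if (s, i) in seen and seen[(s, i)] != nxt:
            problems.append(f"inconsistent: {s}/{i}")
        seen[(s, i)] = nxt
        if nxt not in states:
            problems.append(f"undefined state: {nxt}")
    for s in states:
        for i in inputs:
            if (s, i) not in seen:
                problems.append(f"incomplete: {s}/{i} undefined")
    return problems

table = [("IDLE", "go", "RUN"), ("RUN", "go", "RUN"), ("RUN", "stop", "IDLE")]
problems = check_state_table(["IDLE", "RUN"], ["go", "stop"], table)
# The specification forgot to say what IDLE does on 'stop'.
```

An omission of exactly this kind, silently resolved by the implementer, is how specification ambiguities end up in silicon.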
Informal specifications. These are written in common natural technical language. Initial specifications of integrated circuits are most often written in this way, for the dialoguing parties use the form that best suits each item of the specifications. However, they do not lend themselves to mechanical interpretation; their verification proceeds from the engineer's ingenuity.

Formal specifications. These are expressed in a form that provides a technical and mathematical description of the concepts on which the circuit to be designed is based. Various forms favoring automatic verification are possible; as examples one may mention:
- flow-charts
- chronograms
- state tables
- logic diagrams
- Boolean functions
- tables of physical data

As a rule, several of the above forms may be present in a same set of specifications. The above list alone shows the difficulty of standardization. To attain both standardization and verification at this level of description, much effort is currently made on the design of languages and tools for writing formal specifications. Among these tools (77) one may mention Petri nets, GRAFCET, and SARA (31, 72). However, their use by industrial IC designers is still limited because of their recent advent and of the complexity of their processing and of the analyses they entail.

5. VALIDATION OF ARCHITECTURE

The step corresponding to the selection of circuit architecture is more and more important among the various design steps. As a matter of fact, VLSI circuits become true computer systems, and the selection of a proper architecture is an important factor in their optimization. This selection is performed by refining the initial solution through a sequence of synthesis steps, each of them following the same scheme. Starting with a functional description (i.e., the function to be implemented), one derives a structural description representing one of the many possible implementations by means of entities with more basic functions. Each structural description is, in fact, a set of interconnected modules, each with a functional description. The design proceeds in this way as a sequence of synthesis and optimization steps (17, 18, 33, 36, 42).
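Formal models such as the Petri nets cited above lend themselves directly to mechanical analysis, because their semantics is a token game that a program can play exhaustively. A minimal sketch (the two-place request/grant net is a made-up example, not taken from the paper):

```python
def enabled(marking, pre):
    """Transitions whose input places all hold enough tokens."""
    return [t for t, need in pre.items()
            if all(marking.get(p, 0) >= n for p, n in need.items())]

def fire(marking, pre, post, t):
    """Fire transition t: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

# Request/grant handshake as a two-place net: t1 moves the token.
pre = {"t1": {"req": 1}}
post = {"t1": {"grant": 1}}
m0 = {"req": 1, "grant": 0}
m1 = fire(m0, pre, post, "t1")
```

Enumerating reachable markings with these two primitives is exactly how consistency questions (deadlock, unreachable states) about a specification become machine-checkable.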
This procedure, called top-down design, is stopped when the basic functions used for implementing the initial function can be expressed directly in hardware terms. Concomitant with this design methodology is a circuit description language which should accommodate, without intermediate transcoding, the successive versions of the implementation from the highest level (algorithms or conceptual schematics) to the lowest level (elementary logic functions) (12, 59). As homogeneity is the main expected advantage, one may easily understand that such a language should be compatible from one step to the other. Each structural description is, therefore, a description processable by a computer program. Along with this language, it is useful to have at hand three types of tools:
- a functional simulator;
- a consistency verifier;
- an assessment program, as a decisional aid for the choices to be made during the partial synthesis steps.

The above tools (description languages and associated simulation programs) well suit the current trend of VLSI's incorporating complex functions such as microprocessors and similar circuits.
The consistency verifier allows checking that each refinement of a part of the description leads to a sub-set equivalent to the implemented function and compatible with the associated constraints; at each step, it must be verified that the constraints associated with the block implementing the given function are met. Assessment programs, on the other hand, are virtually non-existent: the designer should be able to establish an overall assessment of each of the possible solutions, weighing characteristics as different as testability, silicon area, and ease of implementation and manufacturing. If one analyzes the tools present in current IC design automation systems, one may note that nearly no system provides a language for true functional description along with a hierarchical functional simulator. As a matter of fact, many functional languages and simulators were designed long before LSI's advent (12, 14, 42, 48, 98); however, these languages have experienced limited use, due to their variety and lack of industrial status. VLSI designers could nevertheless benefit from this wealth of existing tools, in the analysis of their circuits' testability and in their functional partitioning.

6. VERIFICATION OF LOGIC AND ELECTRICAL IMPLEMENTATIONS

Once a structure is chosen and validated, the designer has to gradually define an electrical and logic implementation. The initial data he has at hand is a set of interconnected functions along with a list of basic functions; for each function to be implemented, several solutions are possible according to circuit size. This procedure is aided by synthesis and consistency verification programs. Two cases may arise:
- macrofunctions: in this case the functions map onto basic cells whose logic and electrical structures have been defined previously (basic cells specific to each technology); simulation is not necessary in principle, and in a few cases (PLAs, shift registers, memories, etc.) systematic or even automatic synthesis procedures may be used, with automation proceeding up to layout;
- non-standard functions: in this case, if the designer must create new cells, he must use an analog circuit simulator for checking electrical and timing characteristics.

6.1 Verification of the electrical implementation

Today there are many tools available for this operation; analog circuit simulators are among the first CAD tools to have been designed, and this well before the advent of integration. In spite of an apparent abundance, it appears that LSI designers do not have a wide choice among possible programs.
The general use of microprocessor-like circuits and the needs of VLSI cause renewed interest in these languages, which should be instrumental in the actual use of structured design for digital systems. In this instance, translating functions into logic operators is aided by a simulator allowing, in general, checking that logic and dynamic specifications are met, at least with respect to functional and global performances; by no means are these performances precisely defined while the synthesis is still crude.

Schematically, one may distinguish first two main categories of programs:
- the programs intended for component designers;
- the programs intended for users of components.

Choice of tools of the two above types is, of course, dependent upon the technology in use and the accuracy that is required. Experience gained while using analog circuit simulators shows that a universal program is somewhat utopian; such programs are most often technology dependent. Consequently, it is preferable to have a set of complementary programs available, each most suited to given conditions, such as the type of description in use and the type of analysis to be performed.
A more accurate categorization may be adopted:
- general-purpose simulators addressing mainly non-linear circuits for transient analyses; to this category belong SPICE-2, ASTAP, SYSCAP, CIRCUS, NASAP, ESOPE, ANP3, IMAG-3, and TRAC;
- simulators dedicated to linear circuits, e.g., SLIC, CORNAP, OPNODE, COMPACT, SNAP, and PHILPACK;
- simulators dedicated to a given technology and handling non-linear circuits with a stress on transient analysis; these programs are often derived from programs of the first category, are less complex than them, and are less expensive to use, e.g., MSINC (96), SIMPIL (11), MICE (55), MOTIS (20, 30), T-SPICE-2, SPLICE (70), ASTEC (46), SUPERSCEPTRE, and COD; the integration technology is not in evidence but is incorporated in the models, which are validated elsewhere, either through programs of the first category or through programs used in the validation of the physico-chemical process (see section 8 below); components are described both by an approximate model and by the numerical values of the parameters of this model;
- simulators specific to a given class of circuits: amplifiers, filters, oscillators, power supplies, microwave or hybrid circuits.

A wealth of such programs are commercially available, particularly in the second category (32, 96). Simulators of the first two categories are most often run on large computers, though minicomputer versions are appearing (10, 30); simulators of the last two categories are run indifferently on large and small computers.

One may also distinguish programs according to the analysis performed, for given conditions of simulation:
- DC state: knowing this state is a prerequisite for the other analyses (AC, transient, optimization, tolerance analyses); non-linearities are usually modeled as a set of linear cases around a DC point;
- AC analysis: small-signal response for given DC conditions; a frequency scan is usual;
- transient analysis: response of the circuit to stimuli defined by the user, for predetermined initial conditions given by the user or computed elsewhere;
- noise analyses;
- sensitivity and tolerance analyses for the different parameters of a circuit (15, 32): such analyses are of prime importance in assessing IC manufacturing yield and viability of circuits;
- fault analyses (shorts, open-circuits, components outside of their tolerances): such analyses are useful in preparing testing sequences to be fed to automatic testing equipment;
- optimization: in general these computations are costly, because the mathematical algorithms available assume that the objective functions exhibit only one extremum and are well behaved; actual objective functions do not necessarily meet those assumptions;
- functional analysis: a capability in demand in the case of LSI circuits; such an analysis is usually beyond the capabilities of conventional simulators (this mode of simulation is useful in the LSI context, as dealt with in section 8 below).

The functions may be described in varied ways:
- rational functions (linear circuits)
- mathematical functions
- empirical formulas
- tables of data
- presolved differential equations

One may also distinguish programs addressing the following modes of simulation: general-purpose numerical simulators; semi-numerical simulators (linear circuits); and symbolic analyses. The two latter modes provide more accurate and compact information.
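Knowing the DC state is a prerequisite for the other analyses, and at its core a DC solution is a Newton iteration on the circuit's nonlinear node equations, each nonlinearity being linearized around the current iterate. A one-node sketch for a hypothetical source-resistor-diode loop; the component values and the simple exponential diode law are assumptions of the example, not a description of any program named above:

```python
import math

def dc_operating_point(vs, r, i_s=1e-12, vt=0.02585, v=0.6, tol=1e-12):
    """Newton iteration on the node equation of a source-resistor-diode
    loop: the diode current i_s*(exp(v/vt) - 1) must equal the resistor
    current (vs - v)/r at the diode node voltage v."""
    for _ in range(100):
        f = i_s * (math.exp(v / vt) - 1.0) - (vs - v) / r   # KCL residual
        df = (i_s / vt) * math.exp(v / vt) + 1.0 / r        # its derivative
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v

v_diode = dc_operating_point(vs=5.0, r=1000.0)
```

The same linearize-and-solve loop, applied to a full nodal matrix instead of a scalar, is what makes the DC point the anchor for AC (linearization at that point) and transient (repeated at each time step) analyses.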
Optimization is most often restricted to linear circuits, whose analysis is not expensive. A related capability is the connection with synthesis programs: the structure of the circuit to be analyzed is provided by a synthesis program which has as input data the response specifications to be met; synthesis is restricted almost entirely to linear circuits (passive, active, CCD filters), and simulation is then used for verification and optimization.

The circuits themselves may be described in several ways:
- as an interconnection of basic passive and active components (diodes, transistors, resistors, capacitors);
- by the numerical values associated with the model parameters;
- by symbolic parameters;
- as an interconnection of functional blocks (filters, amplifiers, etc.);
- by scattering parameters.

Further distinctions concern a given industrial context (system design; use of components, discrete or integrated; device manufacturing; manufacturing and quality control) and a given computer environment (remote batch, time-sharing, stand-alone computer, large or small computers). Additional capabilities include simulation taking into account the layout data (coupling between components, parasitic resistors, parasitic capacitors between layers, delays, etc.), simulation taking into account thermal mappings (see for instance program T-SPICE (97)), logico-analog simulation (timing verification), and analog fault simulators (11, 20). The main evolution to be considered as necessary is providing true and extended macro-modeling and macro-simulation capabilities. Taking into account the above considerations, let us examine the evolution required in order to meet the forthcoming needs of LSI.
- use of macromodeling techniques (gain between 10 and 30) (16, 37, 47, 50-53);
- a unified language for several modes of simulation or for several simulators (69);
- automatic connection with the other tools, especially with drawing aids;
- hierarchical structure: simultaneous simulation of several sub-circuits described at different levels of abstraction (71, 84, 94);
- use of intelligent computer terminals, so that interactivity and user comfort are improved.

A new generation of simulators with these features is under way. This brief analysis, which is by no means exhaustive, well depicts the diversity in the capabilities expected from analog simulators.

For logic verification, the designer uses tools for logic simulation and for fault analysis (either for a priori assessment of circuit testability, or for generation of testing sequences used later on); to the latter mode belong programs for synthesizing testing sequences. The latter point has never received a satisfying solution for the case of MSI, and it is recognized that the insufficient approaches taken so far will be of little help for the VLSI case.

Possible structures for logic simulators and simulation techniques are fairly well known today, and many efficient programs have been developed. As several papers presented at this symposium deal with logic simulation and testing sequence generation, we will restrict our discussion to the evolution considered as necessary for logic simulators to cope with VLSI's constraints. In spite of the fact that nearly every IC DA system includes at least one logic simulator and one program for testing sequence generation, one can note a gap between the characteristics of VLSI circuits and the capabilities of these tools (description modes and analysis levels), which, unfortunately, date back to the eras of discrete components and MSI technology (58) and
have not evolved at the same pace as circuit complexity. There are several reasons for this situation. First, the methods that work for the MSI case need a gate-level description; the expected size of VLSI circuits makes those methods obsolete, taking into account the higher combinatorics involved in the corresponding algorithms. Secondly, those methods rest on necessary assumptions which do not match real life (for example, initial states of circuits which cannot be ascertained), and there is no proof that the simplifying assumptions regarding faults will still hold for the VLSI case; conventional fault analysis algorithms (serial, parallel, deductive and concurrent fault simulations) are likely to be modified accordingly. Moreover, communications among the various simulation modes are most often manual; logic simulators seldom are actually integrated with the other CAD tools (see § 7); formats are often inappropriate, and the large volume of results makes their processing cumbersome, with no guarantee of exhaustiveness.

6.2 Verification of the logic schematic

Once the functional structure of a circuit is established, the designer must choose a logic realization: he translates the functional scheme into a logic diagram described as an interconnection of basic logic elements (gates, flip-flops, registers, memories, etc.), which is then simulated for both purposes of design verification and of preparation of test programs. Faults entail the propagation of incorrect or non-stabilized logic values, which simulation aims at uncovering.

Simulation performed at a functional level, and no longer at the gate level, provides a gain in computation time, results that are more compact and easier to apprehend, and more accurate information on the origin of hazards and races and, in particular, on the way to eliminate them. These directions, still little investigated (73, 74, 87), are worth considering for circuit design verification; some preliminary work (2, 26, 44, 45, 48, 65, 89, 90) has been done addressing the above points.
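The propagation of non-stabilized or unknown values mentioned above is commonly handled by adding a third logic value X to 0 and 1. The sketch below is a minimal illustration of that idea, not the implementation of any simulator cited here:

```python
# Minimal three-valued (0, 1, X) gate evaluation, as used by logic
# simulators to propagate unknown or non-stabilized states. "X" = unknown.

def and3(a, b):
    if a == 0 or b == 0:
        return 0            # a controlling 0 forces the output regardless of X
    if a == 1 and b == 1:
        return 1
    return "X"              # otherwise the output cannot be determined

def not3(a):
    return "X" if a == "X" else 1 - a

# An uninitialized flip-flop output ("X") is blocked only by controlling values:
print(and3("X", 0))   # -> 0   (the 0 input dominates)
print(and3("X", 1))   # -> X   (the unknown propagates)
print(not3("X"))      # -> X
```

The same evaluation rules extend to OR, NAND, etc., and let a simulator detect outputs that never settle to a known value from an unascertained initial state.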
Finally, we would like to mention a novel direction of development, still at the research stage, but worth considering for circuit design verification. Experience with conventional logic simulators indicates that users frequently would prefer to be able to deal with data and results not expressed as sequences of 0s and 1s, but rather expressed as symbols. Symbolic manipulation corresponds to a novel mode of simulation which conventional logic simulators cannot accommodate. Simulators capable of handling symbolic data would provide the following advantages:
- the result format is similar to the way in which specifications are usually written;
- no assumptions regarding initialization are required.

This approach seems promising, for it is close to functional techniques and opens avenues to the use of powerful tools for systematic verification, such as those for proof of design correctness (1); on the other hand, its results are not yet made fully available to LSI circuit designers and to those responsible for quality control.

6.3 Verification of timing characteristics (13, 43, 49, 56, 85, 92)

This verification requires checking the maximum clock frequency consistent with the propagation delays, the dynamic parameters of the components and the values of a few signals, and optimizing the dynamic performances of the integrated circuit for a given implementation. The object is to uncover potential delay faults, caused by propagation delays of the components in the circuit falling outside their manufacturing tolerances or distributions.
Schematically, two types of approaches may be distinguished:

- Structural analysis of the graph derived from the circuit, in order to enumerate its internal propagation paths and to determine their respective propagation delays. This approach has been implemented by several authors (40, 56, 87).
- Fault propagation along sensitizable paths (13, 49): one follows the propagation of an input transition along the sensitizable paths of the circuit. This approach to verification involves two types of tools: the first one, a classical one for generating testing sequences, provides a list of sensitizable paths and criteria of selection among this list; the second one corresponds to a conventional logic simulator.

Timing verification of VLSI circuits is still a problem awaiting a satisfying solution. A direct attack of the problem is generally economically unrealistic: for an n-input combinational circuit, an exhaustive delay testing would require determining the maximum propagation delay of the circuit for 2^n x (2^n - 1) input transitions among the 2^n input combinations (for n = 10 this number is close to 10^6). Due to such combinatorial computations, the general case cannot be handled this way; however, if restrictions are placed on the possible structure of circuits (29), determining the maximum propagation delay path may be considered on a practical basis (85). Considering the higher combinatorics involved, it is very likely that a solution is to be sought not in magic algorithms, but rather through modifications of the design procedure, so that the initial problem is simplified from the beginning; this is a good instance of a situation where the feasibility of a CAD tool is highly dependent on the design procedure itself. For this reason, other approaches involving tools departing from conventional ones have been proposed.
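The structural analysis described above amounts to a longest-path computation on the directed acyclic graph of the circuit. A minimal sketch, assuming fixed per-block delays (block names and delay values are invented):

```python
# Longest-path (critical path) computation on a combinational circuit
# modeled as a DAG, in the spirit of the structural analysis above.
# Block names and per-block delays (in ns) are illustrative.
from functools import lru_cache

delay = {"g1": 3, "g2": 5, "g3": 2, "g4": 4}
fanin = {"g1": [], "g2": [], "g3": ["g1", "g2"], "g4": ["g3"]}

@lru_cache(maxsize=None)
def arrival(g):
    """Latest signal arrival time at the output of block g."""
    return delay[g] + max((arrival(p) for p in fanin[g]), default=0)

worst = max(fanin, key=arrival)
print(worst, arrival(worst))   # -> g4 11  (path g2 -> g3 -> g4: 5 + 2 + 4)
```

With per-block minimum and maximum delays instead of a single value, the same traversal yields the delay interval against which a clock period can be checked.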
Here it is implicitly assumed that a delay fault originating in a block located on a sensitizable path causes a delay, with respect to the electrical and logic responses, that can be detected on the circuit primary outputs; this assumption is verified only partially in practice. Such potential faults may also be detected through electrical or logic simulations with the tools described above (models most often provide minimum, typical and maximum propagation delays); IC designers, however, would prefer an exhaustive way of testing, which the above simulators cannot guarantee.

Recently, use has been made of simulators for which the logic levels are no longer considered as Boolean values, but as stochastic variables taking values between 0 and 1 according to a given or computed probability; an elaborate program (62-64) has been developed along these lines. This technique, akin to PERT network analysis, has so far experienced limited use: the problems involved in the industrial use of such simulators do not lie in the optimization of the simulation algorithms, but in the preparation of model libraries including this probabilistic treatment.
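Treating logic levels as probabilities, as in the stochastic simulators just mentioned, can be illustrated by propagating signal probabilities through basic gates. The sketch below assumes statistically independent inputs, a simplification that actual tools must refine (e.g. for reconvergent fan-out):

```python
# Signal-probability propagation through basic gates, each logic level
# being treated as P(signal = 1). Inputs are assumed independent here,
# which is the usual simplification such simulators must correct for.

def p_not(a):
    return 1.0 - a

def p_and(a, b):
    return a * b

def p_or(a, b):
    return a + b - a * b

# Probability that the output of (a AND b) OR (NOT c) is 1:
pa, pb, pc = 0.5, 0.5, 0.9
print(round(p_or(p_and(pa, pb), p_not(pc)), 6))   # -> 0.325
```

Propagating such probabilities through the whole network gives, at low cost, an estimate of how often each node toggles, which is what probability-based timing-margin verification builds on.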
7. VERIFICATION OF LAYOUT (5-7, 9, 22, 23, 27, 54, 57, 60, 66-68, 75, 79, 83, 86, 88, 91, 95)

IC manufacturing requires two categories of data:
- a mask description: this information is used during the microlithographic operations specific to the mask machines at hand;
- the parameters of the physico-chemical process: these data are necessary to set the conditions for operations such as diffusion, oxidation, etching, etc. These parameters are fixed once a technology is chosen, and concern manufacturers rather than circuit designers.

This section deals with the verification of the first category of data, which draws on various CAD tools; verification of the second category of data is dealt with in the next section.

7.1 Verification of the logic and electrical schematics

The problem here is to verify that, once an IC mask is drawn, the circuit derived from the descriptions of the masks (in other words, the circuit to be actually implemented) is equivalent to the circuit devised initially by the electronic designer. During the various steps involved in the translation of a functional diagram into both a logic schematic and an electrical schematic and, subsequently, into a set of actual masks, several types of errors or flaws may be introduced:
- inconsistencies between the topology of the electrical and logic schematics derived from the actually implemented mask and the ideal schematics specified initially;
- introduction of unacceptable parasitic components, entailed by the physical implementation and not apparent in the ideal electrical schematic.

Detection of all these faults is based on various mask analyses and on a detailed modeling of basic passive and active components:

- Analysis of the topological structure. By recognition algorithms, one determines characteristic patterns, or combinations of characteristic patterns, identifying the basic components (resistors, diodes, transistors, gates, cells, etc.) and interconnections of the schematic, or parasitics; recognition may be followed by computation of the numerical values of their parameters. The analysis requires complex algorithms based on sort, merge and selection operations on the files storing the descriptions of the different mask levels; those algorithms are not generally applicable, but are most often specific to a given technology. If mask descriptions would include identifiers for all the components, this analysis would be straightforward; unfortunately, such identifiers are not generally available, for their introduction is cumbersome and prone to error. In the end, a list of the components, of their connecting pads and of the electrical nodes (equipotentials), with their interconnections, is drawn. This list is subsequently compared to the corresponding lists drawn initially for simulation purposes during the preceding steps in the design.
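The comparison between the extracted list and the initial lists reduces, in principle, to a set comparison once components and nets have been identified; a toy sketch with invented identifiers:

```python
# Toy comparison of two component/interconnection lists: one taken from
# the initial schematic, one extracted from the mask artwork.
# Each entry: (component type, tuple of connected equipotentials).

schematic = {("transistor", ("net1", "net2", "gnd")),
             ("resistor",   ("vcc", "net1"))}

extracted = {("transistor", ("net1", "net2", "gnd")),
             ("resistor",   ("vcc", "net1")),
             ("capacitor",  ("net2", "gnd"))}   # parasitic found in the layout

missing  = schematic - extracted   # specified but not implemented
spurious = extracted - schematic   # implemented (or parasitic) but not specified
print(sorted(spurious))   # -> [('capacitor', ('net2', 'gnd'))]
```

The hard part in practice is not this final comparison but the identification step itself: matching extracted nets and devices to their schematic counterparts without pre-assigned identifiers.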
- Analysis of electrical and logic responses. As a matter of fact, the electrical and logic circuits have each two descriptions, then fully documented: one obtained from the mask, the other at the origin of the mask and representing the initial specifications. Consequently, an analysis of the descriptions is sufficient, without resorting to an actual simulation. Checking these two pairs of descriptions against each other allows two other verifications:
a. verification of proper electrical or logic use of the components (load conditions, fan-in, fan-out, electrical conditions, etc.);
b. detection of non-optimal choices or accidental errors regarding the sizes of components with respect to their expected electrical and dynamic characteristics, overlooking the tolerances admitted for the drawing and determined by the microlithographic process.

In addition, checking the results of two additional simulations, logic and electric, against the specifications may serve two purposes: verification that the specifications are met, by uncovering design errors undetectable by simulation at the early design steps; or tuning of the circuit with respect to the parasitic effects introduced by the physical implementation. Both may be analyzed with the conventional simulators used in the early steps of design. This second type of automated analysis is not provided by every IC design automation system.

As examples, one may mention the following programs: VETO (57), CMAT (75), MASOB (88), TOPOL (CII-HB).
Of course, the quality of the verification will be that of the set of input stimuli (signals, electrical conditions, etc.) defining the simulations.

7.2 Verification of microlithographic constraints

The inaccuracies concomitant with microlithographic operations must stay within tolerances determined by the physico-chemical operations corresponding to each geometrical shape in the masks (minimum diffusion width, minimum spacing between oxide grid and metal contact, etc.). Basically, the analysis consists in verifying overlappings or minimum spacings between basic figures of a same mask or of different mask levels, either automatically, by means of costly algorithms, or interactively, by aiding visual inspection with display tricks (superposing colors (54) for those DA systems with color displays (54, 91)). To reduce their cost, restrictions are often placed on shapes (for instance, a limited number of edges in a polygon, no oblique or circular-arc edges); moreover, the corresponding operations are complex, for they must take into account various situations, and they entail combinatorics. In practice, these constraints are only partially met during mask drawing; for this reason, a global verification is required after the final mask is drawn. Most IC design automation systems provide some sort of verification of this kind.

7.3 Present status and needs

Regarding the verifications described in the two preceding paragraphs, references 6, 7 describe the capabilities of 20 IC DA systems, as well as discussions of several algorithms. Generally speaking, one can note that most algorithms do not adequately match the needs of LSI (for bipolar technology: a few thousand components, 6 to 7 levels of masks, several thousand vectors per mask; for MOS technology: a few tens of thousands of components, 10 levels of masks, several hundred thousand vectors per mask); their use requires important computer resources (hours of CPU time).
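The overlap and minimum-spacing verification of § 7.2 can be pictured as a pairwise check between the rectangles of two mask levels. The deliberately naive sketch below ignores the sort/merge techniques that production checkers rely on (all dimensions are invented):

```python
# Naive minimum-spacing check between axis-parallel rectangles of two
# mask levels. A rectangle is (xmin, ymin, xmax, ymax); units are um.

def spacing(a, b):
    """Edge-to-edge distance between two rectangles (0 if they touch or overlap)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

metal   = [(0, 0, 4, 1), (0, 5, 4, 6)]
contact = [(1, 2, 2, 3)]
MIN_SPACING = 1.5

violations = [(m, c) for m in metal for c in contact
              if spacing(m, c) < MIN_SPACING]
print(violations)   # -> [((0, 0, 4, 1), (1, 2, 2, 3))]
```

Checking every pair is quadratic in the number of figures, which is exactly why real mask checkers sort the figures and sweep over them instead of enumerating all pairs.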
8. VERIFICATION OF MANUFACTURING DATA (4, 24, 28, 38, 82)

CAD tools also have a role in the verification of manufacturing conditions; this verification concerns those responsible for the technology more than circuit designers, even custom designers. The selection of the parameters of the physico-chemical process determines the characteristics of the basic components, and one of the factors in the optimization of manufacturing yield is the minimization of the sensitivity of the electrical parameters of the components to fluctuations in the control of the physico-chemical process. An optimal choice requires solving a statistical modeling problem.

As a first approach, conventional Monte Carlo techniques have been used: the same component, modeled according to the physico-chemical process in use, is simulated for various combinations of values of the parameters in the model (8). Besides the fact that each simulation is costly with regard to the required level of accuracy, this approach requires, as for every Monte Carlo technique, a large number of simulation iterations for reaching an acceptable level of confidence; in fact, the results are not very satisfying, for random parameter combinations lead to unrealistic conditions. For this reason, a second type of approach has been followed: one looks for a set of independent parameters from which the remaining parameters are computed, while taking into account their correlations with the first ones; the independent parameters are determined by means of a statistical analysis of experimental samples.

The sequence of the different analyses is schematized as follows: description of the physico-chemical process -> diffusion profiles -> parameters of the component model -> sensitivities, correlations, statistical distributions, extreme performances and nominal values, shared between the component designer and the circuit designer.

Several such programs addressing different technologies have been developed: SUPREM (4, 38), SITCAP (24), BIPOLE (82). In practice, two analysis programs are used concurrently:
- the first one provides, from physico-chemical data and with a good accuracy, a prediction of the diffusion profiles;
- the second one provides, from results obtained with the first one and from geometrical data, the nominal values and the distributions of the electrical parameters of a given model for components.
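The Monte Carlo approach described above can be sketched in a few lines; the "circuit response" is reduced here to a trivial RC time constant, and the distributions and specification limits are invented for the example:

```python
# Monte Carlo tolerance-analysis sketch: draw component parameters from
# their manufacturing distributions, evaluate a performance figure for
# each sample, and estimate the fraction of circuits out of specification.
import random

random.seed(1)                        # reproducible run
N = 10_000
out_of_spec = 0
for _ in range(N):
    r = random.gauss(1e3, 50)         # resistor: 1 kohm nominal, sigma 5 %
    c = random.gauss(1e-9, 5e-11)     # capacitor: 1 nF nominal, sigma 5 %
    tau = r * c                       # performance figure: RC time constant
    if not 0.85e-6 <= tau <= 1.15e-6:
        out_of_spec += 1

yield_est = 1 - out_of_spec / N
print(f"estimated yield: {yield_est:.1%}")
```

The independent-parameter refinement described in the text would replace the two independent `gauss` draws with correlated draws derived from measured process data, precisely to avoid the unrealistic parameter combinations that plain Monte Carlo produces.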
By using these programs, the circuit designer, especially if he is close to the component manufacturer, may find a powerful tool for validating the component models and the parameter values used in analog simulation. This type of analysis has been in use for several years, and the results obtained with it are considered as satisfying.

9. CONCLUSION

After this brief survey of the different features of LSI design verification, one may wonder how durable current CAD tools will prove with respect to imminent VLSI. Several important facts must be mentioned:
- Available tools are rather various and ill-matched; the analyses that are performed are microscopic rather than macroscopic.
- Some analyses cannot be performed economically: due to their combinatorial nature, it is demonstrated that their complexity, in the mathematical meaning, makes it illusory to search for magic algorithms, unless either new design procedures simplifying the problems at the start are adopted, or restrictions with respect to exhaustiveness or accuracy are accepted; in practice, one has to be satisfied with heuristics only.
- Program form and method of use lack versatility, to the point of deterring potential users.
- Communications among the different tools are most often difficult and little automated.
- With respect to their structures and capabilities, present CAD tools patently are better adapted to a linear sequence of the steps in IC design. In fact, the optimal design of complex circuits requires making choices during the early design phases while keeping a global view of all the other steps, and changes in structure and implementation require questioning decisions made at earlier steps in the development process; by no means is it realistic to force designers to simultaneously apprehend all the aspects of the design.

In the quest for a better adequacy of CAD tools to the foreseeable needs of VLSI, one may note several trends:
- Capabilities for symbolic analysis at the different levels of description (functional, logical and electrical); such analysis is particularly favorable to exhaustive verification and to the use of techniques for proof of design correctness.
- Capabilities for "hybrid" simulation, allowing simulation of circuits where several sub-circuits are described at different levels: functional, logical, electrical and timing.
- Integration of the different CAD tools in a same consistent set, favoring a design procedure allowing global feedbacks from one design step to another.
- Communication among the various analyses, which must be coordinated in order to guarantee the exhaustiveness of design verification.
- Capabilities for macro-modeling and macro-simulation at the different levels of description (functional, logical, electrical, temporal, thermal).
- Allowing for different modes of description and simulation for a same analysis; at present, the selection criteria are not well formalized nor known to users, and a better explicitation of the criteria for selection among the various possible CAD tools is needed.
- Automatic communication among the different tools by means of a central data base (14), accessible at each design step and provided with means for automatic translation among the different levels of description of a same circuit.
- Hierarchical structure of analysis tools: verification of specifications, testability analysis, etc.
- Improving flexibility and accessibility of tools through general use of minicomputers and intelligent terminals.
- Simplifying analyses by modifying design procedures at inception, so as to avoid crippling combinatorics (e.g. symbolic simulation, symbolic layout).
Analysis algorithms have to be modified according to this context. It is only at the price of such improvements that VLSI needs will be met; as witnessed by the annexed bibliography of recent works, work in these directions is already well in progress.

J.-C. RAULT, J.-P. AVENIER, J. MICHARD, J. MUTEL

References

[1] S.K. Abdali (1971) : On proving sequential machine designs, IEEE Transactions on Computers, vol. C-20, n° 4, December 1971, pp. 1563-1566
[2] M. Abramovici, M.A. Breuer, and K. Kumar (1977) : Concurrent fault simulation and functional level modeling, Proceedings of the 14th Design Automation Conference, June 1977, pp. 128-137
[3] G. Alia, P. Ciompi, and E. Martinelli (1978) : LSI components modelling in a three-valued functional simulation, Proceedings of the 15th Design Automation Conference, June 1978, pp. 428-438
[4] D. Antoniadis, S.E. Hansen, and R.W. Dutton (1978) : SUPREM II - a program for IC process modeling and simulation, Technical Report n° 5019-2, Integrated Circuits Laboratory, Stanford University, June 1978
[5] H.S. Baird and Y.E. Cho (1975) : An artwork design verification program (ARTCON), Proceedings of the 12th Design Automation Conference, June 1975, pp. 414-420
[6] H.S. Baird (1977) : A survey of computer aids for IC mask artwork verification, Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, April 1977, pp. 441-445
[7] H.S. Baird (1978) : Fast algorithms for LSI artwork analysis, Journal of Design Automation and Fault-Tolerant Computing, vol. 2, n° 2, pp. 179-209, May 1978
[8] P. Balaban and J. Golembeski (1975) : Statistical analysis for practical circuit design, IEEE Transactions on Circuits and Systems, vol. CAS-22, n° 2, February 1975, pp. 100-108
[9] J.-C. Bertails and J. Zirphile (1977) : A standardized approach for the reduction of LSI design time and automatic rule checking, IEEE Journal of Solid-State Circuits, vol.
SC-12, n° 4, pp. 433-436, August 1977
[10] B.L. Biehl (1978) : Machine independent minicomputer circuit simulation, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 886-887
[11] G.R. Boyle (1978) : SIMPIL - a simulation program for injection logic, Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 890-894
[12] M.A. Breuer (1972) : Design automation of digital systems, vol. 1 : Theory and techniques, Prentice Hall, 1972
[13] M.A. Breuer (1974) : The effects of races, delays, and delay faults on test generation, IEEE Transactions on Computers, vol. C-23, n° 10, pp. 1078-1092, October 1974
[14] M.A. Breuer (1975) : Digital system design automation : Languages, simulation and data bases, Computer Science Press Inc., Woodland Hills, California
[15] E.M. Butler, E. Cohen, M.J. Elias, J.J. Golembeski, and R.G. Olsen (1977) : CAPITOL - circuit analysis program including tolerancing, Proceedings of the 1977 IEEE Symposium on Circuits and Systems, pp. 570-574
[16] E.M. Butler (1977) : Macromodels for switches and logic gates in circuit simulation, Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 692-695
[17] H.D. Caplener and J.A. Janku (1973) : Improving modeling of computer hardware systems, Computer Design, vol. 12, n° 8, pp. 59-64, August 1973
[18] H.D. Caplener and J.A. Janku (1974) : Top-down approach to LSI system design, Computer Design, August 1974, pp. 143-148
[19] F.Y. Chang (1978) : Pseudo statistical analysis of LSI design, Digest of the IEEE Solid-State Circuit Conference, February 1978
[20] B.R. Chawla, H.K. Gummel, and P. Kozak (1975) : MOTIS - an MOS timing simulator, IEEE Transactions on Circuits and Systems, vol. CAS-22, n° 12, December 1975, pp. 901-910
[21] R.C. Chen and J.E. Coffman (1978) : MULTI-SIM - a dynamic multi-level simulator, Proceedings of the 15th Design Automation Conference, June 1978, pp. 386-391
[22] B.J.
Crawford (1975) : Design rule checking for integrated circuits using graphical operators (program DRC), Proceedings of the Second Annual Conference on Computer Graphics and Interactive Techniques - SIGGRAPH 75, June 1975, pp. 168-176
[23] B.J. Crawford, D.R. Clark, A.G. Heninger, and R.S. Clary (1978) : Computer verification of large scale integrated circuit masks, COMPCON Spring 1978, pp. 132-135
[24] H.J. De Man and R. Mertens (1973) : SITCAP - a simulator of bipolar transistors for computer aided circuit analysis programs, ISSCC Digest of Technical Papers, February 1973, pp. 104, 105, 205
[25] H. De Man (1977) : Adequacy of models to simulation programs and introduction to macromodeling, Journées d'Electronique on Modeling Semiconductor Devices, Lausanne, Switzerland, October 18-20, 1977
[26] H. De Man (1977) : The use of Boolean controlled elements for macro-modeling of digital circuits, Journées d'Electronique on Modeling Semiconductor Devices, Lausanne, Switzerland, October 18-20, 1977; also Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 522-526
[27] I. Dobes and R. Byrd (1976) : The automatic recognition of silicon gate transistor geometries - an LSI design aid program, Proceedings of the 13th Design Automation Conference, June 1976, pp. 327-335
[28] R.W. Dutton et al. (1977) : Correlation of fabrication process and electrical device parameter variations, IEEE Journal of Solid-State Circuits, vol. SC-12, n° 4, August 1977, pp. 349-355
[29] E.B. Eichelberger and T.W. Williams (1977) : A logic design structure for LSI testability, Proceedings of the 14th Design Automation Conference, June 1977, pp. 462-468
[30] N.J. Elias (1975) : A tolerancing program for practical circuit design, Digest of the 1975 IEEE International Solid State Circuit Conference
[31] G.
Estrin (1977) : Modeling for synthesis - the gap between intent and behavior, Proceedings of the Symposium on Design Automation and Microprocessors, February 24-25, 1977, IEEE Publication 77 CH1189-0C, pp. 54-59
[32] S.P. Fan, M.Y. Hsueh, A.R. Newton, and D.O. Pederson (1977) : MOTIS-C - a new circuit simulator for MOS LSI circuits, Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, pp. 700-703
[33] R.S. Fenchel (1977) : SARA user's manual (System ARchitects Apprentice), Computer Science Department, University of California, Los Angeles, California, January 1977
[34] J. Fong and C. Pottle (1977) : Simulation of a parallel microcomputer system for circuit design, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 131-134
[35] D.L. Fraser and S.W. Director (1978) : Model selection for computer simulation of digital MOSFET LSI circuits, Electronic Circuits and Systems, vol. 2, n° 2, March 1978, pp. 39-46
[36] R.I. Gardner (1977) : Multi-level modeling in SARA, Proceedings of the Symposium on Design Automation and Microprocessors, February 24-25, 1977, IEEE Publication 77 CH1189-0C, pp. 63-66
[37] M. Glesner (1978) : New macromodelling approaches for the simulation of large scale integrated circuits, Proceedings ECCTD, September 1978
[38] A.G. Gonzalez, S.R. Combs, R.W. Gill, and R.W. Dutton (1975) : Fabrication process modeling applied to IC NPN transistors using a minicomputer, in Proc. of the Int. Electron. Conv., Sydney, Australia, Paper D-2568, August 25-29, 1975
[39] P.R. Gray and R.G. Meyer : Analysis and design of analog integrated circuits, J. Wiley
[40] J.W. Grundman and S.C. Bass (1978) : Probabilistic analysis of digital networks, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 527-531
[41] H. Halliwell and J.P. Roth (1974) : System for computer design, IBM Technical Disclosure Bulletin, vol. 17, pp. 1517-1519, 1974
[42] R.W.
Hartenstein (1977) : Fundamentals of structured hardware design - A design language approach at register transfer level, North Holland, 1977
[43] R.A. Harrison and D.J. Olson (1971) : Race analysis of digital systems without logic simulation, Proceedings of the 8th Design Automation Workshop, pp. 82-94, June 1971
[44] R.B. Hayter, P.S. Wilcox, H. Rombeek, and D.M. Caughey (1978) : Standard cell macromodels for logic simulation of custom LSI, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 1108-1112
[45] E.L. Hepler and C.A. Papachristou (1977) : A logic simulator for MSI, LSI, and microcomputer systems, Proceedings of the 1977 IEEE Conference on Microcomputers, pp. 220-226
[46] M.H. Heydemann (1977) : A general purpose circuit simulator efficient through sparse tableau and input processing, Proceedings of the 1977 IEEE Symposium on Circuits and Systems, pp. 118-121
[47] M.H. Heydemann (1978) : Functional macromodeling of electrical circuits, Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 532-535
[48] H. Hoehne and R. Piloty (1975) : Design verification at the register transfer language level, IEEE Transactions on Computers, vol. C-24, n° 9, September 1975, pp. 861-867
[49] E.P. Hsieh, R.A. Rasmussen, L.J. Vidunas, and W.T. Davis (1977) : Delay test generation, Proceedings of the 14th Design Automation Conference, New Orleans, June 20-22, 1977, pp. 486-491
[50] H.Y. Hsieh and N.B. Rabbat (1977) : Computer-aided design of large networks by macromodular and latent techniques, Proceedings of the 1977 IEEE Symposium on Circuits and Systems, pp. 688-691
[51] H.Y. Hsieh, N.B. Rabbat, and A.E. Ruehli (1978) : Macromodeling and macrosimulation techniques, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 336-339
[52] M.Y. Hsueh and D.O.
Pederson (1977) : An improved circuit approach for macromodeling digital circuits, Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, pp. 696-699
[53] M.Y. Hsueh, A.R. Newton, and D.O. Pederson (1978) : The development of macromodels for MOS timing simulators, Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 345-349
[54] B. Infante, D. Bracken, B. McCalla, S. Yamashoki, and E. Cohen (1978) : An interactive graphics system for the design of integrated circuits, Proceedings of the 15th Design Automation Conference, June 1978, pp. 182-187
[55] L.C. Jensen and D.O. Pederson (1978) : MICE - a minicomputer integrated circuit emulator, 1978 European Conference on Circuit Theory and Design, Lausanne, Switzerland, September 4-8, 1978
[56] T.I. Kirkpatrick and N.R. Clark (1966) : PERT as an aid to logic design, IBM Journal of Research and Development, vol. 10, n° 2, pp. 135-141, March 1966
[57] J. Lecarpentier (1975) : Computer-aided synthesis of an IC electrical diagram from mask data, Digest of the 1975 IEEE International Solid-State Conference, pp. 84-85
[58] Y.H. Levendel and W.C. Schwartz (1978) : Impact of LSI on logic simulation, COMPCON Spring 1978, pp. 102-119
[59] D. Lewin (1977) : Computer-aided design of digital systems, Crane Russak, New York
[60] B.W. Lindsay and B.T. Preas (1976) : Design rule checking and analysis of IC mask designs, Proceedings of the 13th Annual Design Automation Conference, June 1976, pp. 301-308
[61] P. Losleben (1975) : Design validation in hierarchical systems, Proceedings of the 12th Design Automation Conference, Boston, June 1975, pp. 431-438
[62] B. Magnhagen (1976) : A high performance logic simulator for design verification, Proceedings of the 1976 Summer Computer Simulation Conference, July 1976, pp. 724-726
[63] B. Magnhagen (1977) : Practical experiences from signal probability simulation of digital designs, Proceedings of the 14th Design Automation Conference, pp.
216-219, June 1977 1641 B. Magnhagen (1977) : Probability-based verification of time margins in digital designs, Linköping Studies in Science and Technology - Dissertations n° 17, Linköping University, Sweden, September 1977 VERIFICATION OF LSI DIGITAL CIRCUIT DESIGN 167 1651 M. Malek and A.K. Bose (1978) : Functional simulation and fault diagnosis, Proceedings of the 15th Design Automation Conference, June 1978, pp. 340-346 I66I J. Michard, X.H. N'Guyen, and P. Zamansky (1977) : VISTA - un système d'aide au tracé de circuits intégrés. International Conference on Microlithography, Paris, June 21-24, 1977 I67I C L . Mitchell and J.M. Gould (1974) : MAP - a user-controlled automated mask analysis program, Proceedings of the 11th Design Automation Workshop, June 1974, pp. 107-118 I68I C L . Mitchell (1975) : MAP - Mask Analysis Program, M & S Computing Inc, Report N76-17855, October 21, 1975 I69I A.R. Newton, J.D. Crawford, and D.O. Pederson (1977) : A proposal for a unified input syntax for CAD Programs, University of California at Berkeley, October 14, 1977 I70I A.R. Newton and D.O. Pederson (1978) : A simulation program with large-scale integrated circuit emphasis. Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 1-4 1711 A.R. Newton and D.O. Pederson (1978) : Hybrid simulation for LSI design, ELECTRO 78 1721 W.T. Overman and G. Estrin (1977) : Developing a SARA building block - the 8080, Proceedings of the Symposium on Design Automation and Microprocessors, Feb. 24-25, 1977, IEEE Publication 77 CH II 89-OC, pp. 77-86 1731 M. Perkowski (1978) : The state-space approach to the design of multipurpose problem solver for logic design, IFIP Working Conference on "Artificial Intelligence on Pattern Recognition in Computer-Aided Design", GrenobIe,March 1978, to appear (North Holland) 1741 D.S. Pilling and H.B. Sun (1973) : Computer-aided prediction of delay in LSI logic systems. 
Proceedings of the 10th Design Automation Workshop, June 1973, pp. 182-186 1751 B.T. Preas, B.W. Lindsay and C W . Gwyn (1976) : Automatic circuit analysis based on mask Information, Proceedings of the 13th Annual Design Automation Conference, June 1976, pp. 309-317 1761 Y. Purl (1977) : A Monte Carlo based circuit-level methodology for algorithmic design of MOS LSI static random logic circuits, IEEE Journal of Sol id-State Circuits, vol. SC-12, n° 5, October 1977, pp. 560-565 I77I J.-C Rault (1978) : A bibliography on the logical simulation of digital systems, THOMSON-CSF Internal Report (600 entries) 1781 H. Rombeek and R.E. Thomas (1975) : Electrical simulation of LSI cellular components. Proceedings of the 1975 IEEE Electrical Engineering Conference, Canada. (Program NANS IM) I79I L.M. Rosenberg and C. Benbassat (1974) : CRITIC - An integrated circuit design rule checking program. Proceedings of the 11th Design Automation Conference, June 1976, pp. 14-18 1801 J.P. Roth (1977) : Hardware verification, IEEE Transactions on Computers, vol. C-26, n° 12, December 1977, pp. 1292-1294 I81I J.P. Roth (1973) : VERIFY (a design verifier), IBM Technical Disclosure Bulletin, vol. 15, n° 8, January 1973, pp. 149-151 I82I D.J. Roulston, S.G. Chamberlain, and J. SehgaI (1972) : Simplified computeraided analysis of double diffused transistors including two dimensional high level effects, IEEE Transactions on Electron Devices, vol. ED-19, pp. 809-820, June 1972 1831 G. Russel (1978) : Automatic mask function checking of LSI circuits, Proceedings of CAD 78, Brighton, Sussex, England, March 1978, pp. 182-194 I84I H. Schlchman (1978) : A multilevel simulation strategy, ELECTRO 78 I85I J.J. Shedletsky (1978) : Delay testing LSI logic, IEEE International FaultTolerant Computing Symposium, June 20-22, 1978, Toulouse, pp. 159-164 1861 J.D. Stauffer (1978) : LCL - a compiler and language for logical mask checking, SANDIA Corp. Report SAND 77-2031, March 1978 1871 T.M. Storey and J.W. 
Barry (1977) : Delay test simulation, Proceedings of the 14th Design Automation Conference, New Orleans, June 20-22, 1977, pp. 492-494 168 J.-C. RAULT, J.-P. AVENIER, J. MICHARD, J. MUTEL 1881 L. Szanto (1978) : Network recognition of an MOS integrated circuit from the topography of its masks, Computer Aided Design, vol. 10, n° 2, pp. 136-140, March 1978 1891 M. Tokoro, M. Sato, M. Ishigami, E. Tamura, T. Ishimitsu, and H. Ohara (1978): A module level simulation technique for systems composed of LSI's and MSI's, Proceedings of the 15th Design Automation Conference, June 1978, pp. 418-427 1901 T.J. Wagner (1977) : Hardware verification. Ph. D. Dissertation, Report n° STAN-CS-77.632 and n° AIM-304, Computer Science Department, Stanford University, Stanford, California, September 1977, also report AD-A048684/SGA, September 1977 I91I N. Weste (1978) : A color graphics system for IC mask design and analysis. Proceedings of the 15th Design Automation Conference, June 1978, pp. 199-205 I92I P. Wilcox, H. Rombeek, and D.M. Caughey (1978) : Design rule verification based on one dimensional scans, Proceedings of the 15th Design Automation Conference, June 1978, pp. 285-289 1931 M.A. Wold (1978) : Design verification and performance analysis, Proceedings of the 15th Design Automation Conference, June 1978, pp. 264-270 1941 Y.-M. Wong and C Pottle (1976) : Adaptation of circuit-simulation algorithms to a simple parallel microcomputer structure, Electronic Circuits and Systems, vol. 1, n° 1, pp. 27-32, 1976 I95I M. Yamin (1972) : XYT0LR - a computer program for integrated circuit mask design checkout, Bell System Technical Journal, vol. 51, pp. 1581-1593, 1972 I96I T.K. Young and R.W. Dutton (1976) : Mini-MSINC- a minicomputer simulator for MOS circuits with modular built-in models, IEEE Journal of Sol id-State Circuits, vol. SC-11, n° 5, pp. 730-732, October 1976 I97I T.K. Young, L.K. Scheffer, D.B. Estreich, and R.W. 
G. Musgrave, editor, COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

COMPUTER AIDED DESIGN: THE PROBLEM OF THE 80'S MICROPROCESSOR DESIGN

Bill Lattin
Intel Corporation
Aloha, Oregon

The rapid evolution of semiconductor technology continues to make possible very sophisticated electronic systems on a single silicon chip. At present projections, in 1982 a single silicon chip may have over 100,000 transistors. The problem that this technology evolution presents is how to design, lay out and check this level of complexity. Unless there is a major breakthrough in computer aided design, this level of complexity will go unused, in that the accurate design of 100,000-transistor chips would take 60 man years of layout and 60 man years of checking. At present design rates, it is clear that the major problem for the 1980's is to devise new layout and checking CAD tools so that the semiconductor technology, with all its density, will be usable by the electronic community.

The rapid evolution of semiconductor technology is the major force motivating microprocessor manufacturers to take a second look at their internal design methods. The technology has increased the complexity on a chip by a factor of 4 in the last two years, but the design methods have not changed in the last six or seven years. This means that it now takes more and more of a manufacturer's resources to design each chip. In addition to the amount of resources, the actual time to design, debug and transfer a complex microprocessor to production has increased at the same rate as the chip complexity.
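The 60-man-year estimate quoted above is straightforward arithmetic, sketched below. The productivity figure comes from the paper; the working-days-per-year value is an assumption introduced here for illustration only.

```python
# Rough model behind the layout-effort estimate: total transistors divided by
# per-designer daily productivity gives designer-days; dividing by productive
# working days per year gives man-years. The 170-day figure is an assumption
# for illustration, not a number taken from the paper.

def layout_man_years(transistors, transistors_per_day, workdays_per_year=170):
    designer_days = transistors / transistors_per_day
    return designer_days / workdays_per_year

# A 100,000-transistor chip at 10 transistors per designer-day comes out
# close to the paper's 60 man-years:
effort = layout_man_years(100_000, 10)
```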
The challenge for manufacturers of LSI devices in the future is how to reduce the design cost of the product and how to reduce the actual time from design to volume production. This paper will focus on just one element of the design cycle -- "layout". This is the most costly portion of the design cycle as well as the most error prone.

Figure 1 shows the historical density improvement for microprocessor technology. The complexity of microprocessors at the chip level has grown exponentially for the last few years. By using the number of active transistors on a chip as a general parameter of complexity and plotting it against the year of introduction of that microprocessor, one can get a glimpse into the future. This view indicates that the largest component of the design cycle will remain layout. It could even be stated that layout will become an increasing portion of the cost and possibly become the limiting factor.

[Figure 1: Devices per chip (MOS technology) plotted against year of introduction, extrapolated into the VLSI range.]

At the present time, the productivity of an average layout designer is between 5 and 10 devices per day. This includes the time to draw, check and redraw. A wide variety of layout techniques, such as interactive graphics or manual draw and digitize, fit within this range of productivity. Figure 2 is then derived from Figure 1 by taking the number of transistors that the technology will provide and translating it into man years of layout effort, assuming each layout designer can achieve a productivity of 10 transistors per day. With this level of productivity, a complex microprocessor in 1982 will take 60 man years to lay out.

[Figure 2: Man years for planning, drawing, changing and checking of random logic, by year.]

What this means is that the technology will have outrun the manufacturer's ability to use it -- at least for complex systems design. That is not to say that the technology will go unused, since increasingly dense memory chips can and will make use of it, but large complex microprocessors will have been limited by the layout portion of the design cycle. The solution to this limitation will depend on the microprocessor manufacturer's ability to alter his design methods and develop CAD tools to increase layout productivity to keep pace with the rapid evolution of semiconductor technology.

USER EXPERIENCE IN SIMULATION AND TESTING

C.S. Gaskin
Litef, Freiburg, West Germany

In early spring of 1972, due to increased usage of digital logic, it became clear that the digital test capabilities at LITEF would not match our requirements for the near future (3 years). LITEF did a study of the market to determine how to meet these requirements. Over and above the question of in-house requirements was the question of which direction the industry as a whole was taking. The basic question to be resolved was make or buy, and if buy, what. At that point in time it was clear that the hardware was important; however, the hardware to software cost ratio was at least 1:5 if not 1:9, so the importance was placed on software, with a firm requirement for at least a subset of ATLAS. Our solution to the problem of testing digital logic in this time frame was the purchase of a commercially available automatic test system which used an adapted subset of ATLAS as a programming language.

The goals to be achieved with the introduction of the Texas Instruments ATS 960 were the following:

1. Improved product quality
2. Increased testing capacity
3. Improved repeatability of results
4. Reduced skill levels
5. Management information gathered

These goals could only be met as the result of a total system approach greatly dependent on the software used. It was further clear that the generation by hand of a high quality (95%) test program, whose quality could only be evaluated by a computer simulation, was not economically feasible. The test program simulators and/or generators were either too large or too expensive to be considered for LITEF at that point in time. Due to the high quality requirements of the test programs, we had to purchase this service from the U.S. industry.

LITEF purchased fifty (50) test program sets from Texas Instruments in Dallas and Pacific Applied Systems in California over the course of the next 3 years. These test programs covered 95% of all SA1 and SA0 IC pin faults, with an average board complexity of 50 ICs (2500 NANDs) and about 150 input/output pins. This was in fact a good arrangement, as the costs were reasonable: an average price of two thousand five hundred (2,500) U.S. dollars per program, or 125,000 $ U.S. total purchase price. Over and above this it required approximately four (4) to six (6) man-weeks of effort per program for the in-house checkout, quality control acceptance, and documentation control.

The problems incurred with this solution were not acceptable as a long-term solution, and it was clear that LITEF required an in-house capability of automatic test program generation. The major problems were:

1. Schedules dictated by other companies
2. Costly in-house effort required (4-6 man-weeks per test program)
3. Expensive turn-around times for ECOs
4. Very costly engineering change orders (ECOs)
5. Test program quality not verified
6. Inaccurate hardware documentation

Most of these problems are self-explanatory; however, I would like to point out a number of possibly not so apparent problems in conjunction with ECOs. There is a fixed cost and a variable cost associated with an ECO, both in house and out of house.
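The life-cycle argument above can be made concrete with a small sketch. The dollar figures are illustrative only, taken from the per-program averages quoted in the text (about 2,500 $ U.S. original cost) and an assumed per-ECO cost of similar order.

```python
# Sketch of the life-cycle cost argument: with several ECOs over a board
# type's life, maintenance cost rivals or exceeds the original test program
# cost. The cost_per_eco value is an illustrative assumption, not a figure
# from the paper.

def life_cycle_cost(original, cost_per_eco, ecos=4):
    maintenance = ecos * cost_per_eco
    return original + maintenance, maintenance >= original

# A program bought for 2,500 $ with four ECOs at an assumed 750 $ each:
total, maintenance_dominates = life_cycle_cost(original=2_500, cost_per_eco=750)
```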
Figure 1. ECO tasks and costs (total 4-6 weeks):

Change definition: 1 week (fixed)
Cost estimate and handling: 1 week average (fixed)
Engineering evaluation of change: 2-4 weeks (fixed)
Model change: 2 days (variable)
Pattern generation and resimulation: 1 day (variable)
Program checkout: 2 days (variable)
Quality control and documentation control (fixed)

A list of these costs (Figure 1) makes it apparent why this situation exists. The sad but true situation is that in 90% of all instances the fixed cost is much higher than the variable cost. This, coupled with the fact that the turn-around time for out-of-house ECOs is approximately the same as for new generation, defines a major problem associated with utilizing an out-of-house service for test program generation. The high cost of executing ECOs (see Figure 2), plus the fact that in the normal life cycle of a new design LITEF averages four (4) ECOs per board type, results in the maintenance cost of programs being as large if not larger than the original cost.

Figure 2. Out-of-house vs in-house life cycle cost (50 test programs):

Out-of-house: purchase price 125,000 $ US (original cost) plus 150,000 $ US (ECO cost); in-house effort 250 man-weeks (original) plus 1000 man-weeks (ECOs); life cycle cost per program 17,000 $ US.
In-house: SMC 3103 purchase price (5-year write-off) 50,000 $ US; test engineers 250 man-weeks (original) plus 400 man-weeks (ECOs); life cycle cost per program 10,000 $ US.

These problems, plus the new integrated circuit technology, required that we have an in-house capability of automatic test program generation. LITEF has in the past four (4) years investigated the major commercially available systems on the world market. One of the important discoveries we made was that it was less a question of cost and more a question of capabilities that determined the choice. The prime factors for determination were as follows:

1. Highly automatic test program generation
2. Ease of handling ECOs
3. Defined quality of test programs
4. Fast turn-around times
5. Cost reduction

Naturally a system must demonstrate that it meets the basic requirements to be considered. We selected an SMC 3103 with D-LASAR software from Scientific Machines Corp. of Dallas, Texas. After two years' experience using the system, the following goals have been achieved:

1. Highly automatic test program generation
2. Very easy handling of ECOs
3. Excellent definition of program quality
4. Extremely short turn-around
5. Very large reduction in costs
6. Design verification
7. Improved documentation
8. Automatic schematic drawing

As you can see, we have achieved more than originally planned, and a greater level of achievement in each area than originally planned. The last three points have allowed a new organization which reduces the work load in the ATPG area and at the same time improves the quality of the design, thus reducing hardware integration time. This new capability has made way for the new organization flow shown in Figure 3.

[Figure 3: Functional flow organization. Design inputs and an engineer sketch produce a LASAR model and TECO deck; D-LASAR design verification yields predicted responses and timing information; once verification and the 95% coverage check pass, ATPG and drafting lead to hardware, the automatic test system, and the finished product.]

D-LASAR is used in two separate modes to help the designer verify the logic and the timing of his design before it becomes hardware.
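The defined-quality requirement that recurs throughout, a 95% fault-coverage target, can be phrased as a stopping rule for pattern selection. A minimal sketch; the detects() callable is a hypothetical stand-in for a real fault simulator.

```python
# Greedy pattern selection against a fault list: keep only patterns that
# detect new faults, and stop once the coverage target is reached. The
# detects(pattern, fault) oracle is a hypothetical stand-in for the fault
# simulator; fault and pattern names are arbitrary.

def generate_until_covered(faults, candidates, detects, target=0.95):
    undetected = set(faults)
    chosen = []
    for pattern in candidates:
        newly = {f for f in undetected if detects(pattern, f)}
        if newly:
            chosen.append(pattern)
            undetected -= newly
        if 1 - len(undetected) / len(faults) >= target:
            break
    return chosen, 1 - len(undetected) / len(faults)
```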
The first mode allows the designer to specify input patterns he expects to encounter, and D-LASAR then shows him the logic response. This is in the form of a timing diagram which is easy for the engineer to interpret, and design changes at this point are quick and painless. Once this mode has been successfully completed, D-LASAR selects its own inputs to detect 95% of all defined fault classes and, by varying the circuit response plus and minus 30%, observes whether critical timing problems occur within the defined logic. Timing problems discovered fall into two categories: situations which cannot arise at system level and those which can. All timing problems are discussed with the logic designer; situations which fall into the first category are either ignored or prevented from occurring in future runs, while those which can occur are corrected by design change before proceeding. The reduction in our ATPG work load is realized by being able to determine test access interactively before the design is frozen. This also allows a much shorter test sequence to achieve the same fault coverage.

Directing your attention to Figure 2, we can see where items 2-5 in the achievements list are documented. The manpower required for out-of-house vs in-house ECOs is 2.5:1, but this alone is not the complete story. The manpower requirement also determines the minimum turn-around time, and reducing this requirement to 2 man-weeks allows a very short turn-around. The manpower cost is at the same time the largest portion of the total cost (9:1). The major factor in manpower reduction is the very high quality of the test program plus a good set of documentation. In other fault simulators the run time is so extensive that they are normally not used, due to the cost and the long turn-around times that block the system; the result is lower quality.
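The plus-and-minus-30% check described above can be sketched as a worst-case race test between two converging paths: the data path is taken at its slowest and the strobing path at its fastest. This is an illustrative model only, not the D-LASAR algorithm itself; delays and tolerance are example values.

```python
# Worst-case race check: a critical timing problem is flagged when the
# slowest data arrival is not safely earlier than the fastest strobe/clock
# arrival. Illustrative model; tolerance mirrors the +/-30% variation
# described in the text.

def race_free(data_delay, clock_delay, tolerance=0.30):
    slowest_data = data_delay * (1 + tolerance)
    fastest_clock = clock_delay * (1 - tolerance)
    return slowest_data < fastest_clock

race_free(10, 20)  # ample margin survives the variation
race_free(10, 18)  # nominally fine, but fails under worst-case variation
```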
Further goals/requirements as seen by LITEF are a continuation of the present effort to bring it to a normal conclusion. After studying Fig. 3 one realizes that the loop is closed much too late to be easily corrected. In the time after the drafting effort, but before the film is drawn, a verification must take place to ensure that at the ATS the hardware and its test program are based on the same design. An integrated design system, where there is one input and many outputs all based on one central information source, is the first major goal (see Figure 4). The second major goal is improved design verification. The minimum improvement here is very accurate timing models and variable time simulation. This would necessarily include at least five (5) timing families -- MOS, TTL, low-power TTL, low-power Schottky TTL, and Schottky TTL -- but a better solution would be the ability to model the real IC timing. The design verification should also encompass most other design parameters at the same time: fan-in, fan-out, etc. The third goal should be further cost reduction through general improvements and a better man-machine interface.

During the next short term (2-3 years) LITEF sees the major problem areas as follows:

1. Complete system simulation
2. Large scale integration (LSI) modeling
3. Long computer runs
4. Accurate timing modeling

Now that we have freed the engineer of much of the design verification task at the board level, we are relying on him to solve a much more difficult task: design verification at the system level. This is not very logical, as we have seen at the lower level that it is very expensive, if possible at all. It is clear that we require computer support of this task, and we should not despair, as the NAND requirement will in most cases not exceed one hundred thousand (100,000) for a large digital system.
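The five timing families named above amount to parameterizing the simulator's gate delays by logic family. A table-driven sketch; the delay values are illustrative placeholders, not manufacturer data.

```python
# Per-family NAND propagation delays in nanoseconds. The values are
# illustrative placeholders only; a real model would come from data sheets
# or, as the text suggests, from the actual IC timing.

NAND_DELAY_NS = {
    "MOS": 100.0,
    "TTL": 10.0,
    "low-power TTL": 33.0,
    "low-power Schottky TTL": 9.5,
    "Schottky TTL": 3.0,
}

def path_delay(families):
    """Nominal delay of a chain of NAND gates drawn from these families."""
    return sum(NAND_DELAY_NS[f] for f in families)
```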
[Figure 4: Integrated design system. An engineer sketch or engineering change order feeds a central computer model, from which flow design verification, stimulus generation, interactive layout, automatic drafting tape, parts list, automatic schematic drawing, placement drawing, and the automatic test program.]

The next three items are in fact all self-impacting. That is to say, as more and more LSIs are used, more accurate timing is required and longer run times result. A further impacting parameter is the so-called small improvement that IC manufacturers make, which is first discovered with a new lot of ICs bearing the same marking as before. The fact that they function differently is all too clear; the question is why. This complex of problems must be approached as a many-sided single problem to realize a reasonable compromise. The answers to these problems are not only to be found by the manufacturers of such tools; the IC designer and manufacturer must be brought into the loop in order to achieve a reasonable solution.

In summary, LITEF's experience with introducing an automatic test program generation system has met and exceeded original estimates of productivity and cost improvement. The task is not complete today, nor are we presently aware of a commercial system that would meet our requirements for an integrated design system in the next short-term period.

DEVELOPMENT OF A DIGITAL TEST GENERATION SYSTEM

Paul E. Roberts and Keith T. Wolski
Scientific Machines Corporation
2612 Electronic Lane
Dallas, Texas 75220

As the complexity of digital circuits grew, requirements for a computer-aided digital test generation system became very apparent. For many circuits, the task of developing a thorough test was not economically feasible and almost not humanly possible. In 1969, at LTV Aerospace Corporation in Dallas, Texas, an effort was initiated to develop a simulation and fault analysis program to aid in the production of tests for the sophisticated digital avionics of the A7E weapon system. The system of programs, LASAR (Logic Automated Stimulus And Response), began its growth to become the most complete system of its kind in existence. The processes continue to undergo improvements of speed and capacity, but the basic system is meeting the test of time.

The most important element in the LASAR system is the use of the NAND gate as the basis for all circuit models. An attempt at a functional type system proved too cumbersome and incomplete. A solid foundation was required to allow for the large variety of processes necessary in the complete system. The NAND equivalent approach made the description of devices very accurate and the processes of the system much more manageable. Hence, the system would not have to be modified as new devices became available.

The three-state simulator was developed quite readily since all circuits were of only one component type, the NAND gate. After comparing simulator results to actual circuit results, it became apparent that a timing analysis was necessary. The timing analysis had to consider gate tolerances and tester skew. The simulator then became a very valuable part of the system. With the simulator, an engineer could develop a reliable test, but the test quality was not known. A fault analysis had to be implemented. The simulator could be used to simulate faults, so the simulator was modified to do the fault analysis. The fault analysis included simulation of all NAND gate outputs stuck-at-one and stuck-at-zero and all NAND gate inputs open. Also included were all circuit inputs stuck high and low. This set of faults was chosen because each NAND and NAND junction represents some function of the device or circuit, and to test the circuit each function should be verified. All others would by definition be undetectable. Since all faults must be detected at the circuit outputs, a reverse trace from the circuit outputs was implemented. Critical paths would be sensitized from the circuit outputs to include as many faults as could be detected by this method. The need for a stimulus generator was known from the beginning, but its development was accelerated at this phase of the program; an automatic stimulus generator was badly needed. With all of the mentioned elements, the system was complete.

The NAND gate-level fault analysis opened many people's eyes to the magnitude of the test generation task. Many engineers were shocked to discover that tests thought to functionally exercise the circuit did not test it very well at all. Only a detailed verification of all possible circuit functions is a reliable measure of test quality; this method guarantees a thorough test.

The most common question asked today about gate-level test generation systems is: "How can a system which expands circuits into NAND equivalent form possibly manage circuits with the large-scale devices of today and tomorrow?" Microprocessors and large capacity memory devices produce circuits containing tens and even hundreds of thousands of NAND equivalents. The number of functions or possible faults to be analyzed approaches one million. Is this kind of analysis necessary? Is it possible?

First let us determine what is necessary. The most important properties required in a digital test generation system are:

1. Automatic test generation: today's and especially tomorrow's circuits are so large and complex that manual test generation is unfeasible, humanly impossible and/or too costly.
2. Worst-case timing analysis: device and tester tolerances must be considered to produce a test which will not fail good boards. Tests without worst-case timing analysis cause costly trial-and-error test program implementation and rejection of circuits which perform within manufacturer's tolerances.
3. Detailed test quality analysis: poor or unknown test quality is very costly. Only a detailed verification of all possible circuit functions is a reliable measure of test quality.

To date, only LASAR has demonstrated all these properties. The gate-level circuit description contains a near minimum representation of the elements of a circuit required to perform all the operations listed above. When something less is used to represent a circuit, much of the necessary information is lost, and consequently some of the required properties of the test generation system are lost. The temptation to use these other methods of representing circuits is fueled by the large and complex circuits of today. It must be accepted that the problem of test generation is not solved by a less complete method, only by improvement of the processes of the proven method.

A number of items are necessary to accomplish the analysis of the super large circuits of today. A more powerful computer with many times the capacity of most of today's minicomputers is needed. The memory capacity, processing speed and mass storage capacity of the host computer are all stressed by these circuits. The SMC-3100 (Scientific Machines Corporation-3100) Automatic Test Generation Facility currently contains up to 524,288 20-bit words of memory, typically 80 megabytes of mass storage disk, and specially designed instructions for test generation. Recent improvements to the LASAR program allow processing of circuits containing 30,000 NANDs and 100,000 faults. This system has successfully generated tests for circuits of 14,000 NANDs and 70,000 faults. The complete analysis performed by LASAR on circuits of this size is unheard of on any other test generation system. It is obvious that the above limits must be increased. The processing time must be decreased by higher speed devices and parallel processing. Many program algorithms are being carefully analyzed to determine faster methods and the possibility of dedicated computer hardware to increase processing speed. It is interesting that the same devices that are causing the problem can be used to solve the problem; these improvements are certainly achievable with those very devices. Why try to solve the problem with old tools? SMC is currently in the process of designing a computer dedicated to solving the test generation problem for these large circuits.

Many of the LASAR processes are very simple due to the use of the simple NAND gate and, hence, lend themselves to high-speed methods. In particular, the fault analysis process as described by Armstrong [1] is easily implemented in computer hardware when only the NAND gate need be considered. Consider the following situation:

[Figure: a four-input NAND gate with inputs A, B, C, D and output E.]

For each input to the NAND gate there is an associated list of faults which would cause that input to fail to the opposite state.
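Armstrong's deductive step, specialized to the NAND gate, can be sketched directly from this description: each input carries the set of faults that would flip it, and the gate combines these sets (by intersection when the output is 1, by union when it is 0) and adds the output's own stuck-at fault. Fault labels below are arbitrary examples, not LASAR internals.

```python
# Deductive fault-list propagation through a single NAND gate, after
# Armstrong's method as the text describes it. Each input carries the set
# of fault names that would flip it to the opposite state; the gate returns
# the set of faults critical to its output. Labels are arbitrary examples.

def nand_fault_list(input_values, input_fault_lists, gate_name):
    """Return (output_value, set of faults critical to the output)."""
    out = 0 if all(input_values) else 1
    zeros = [fl for v, fl in zip(input_values, input_fault_lists) if v == 0]
    ones = [fl for v, fl in zip(input_values, input_fault_lists) if v == 1]
    if zeros:
        # Output is 1: it fails only if every 0-input flips to 1 while no
        # 1-input flips to 0 -- the intersection form of the equation
        # Critical Faults to E = AND[A,B,C,D] + E(SA0).
        critical = set.intersection(*zeros)
        for fl in ones:
            critical -= fl
    else:
        # Output is 0: any single input flipping to 0 flips the output.
        critical = set.union(*ones) if ones else set()
    # Add the output's own stuck-at fault (stuck at the opposite value).
    critical |= {f"{gate_name} SA{1 - out}"}
    return out, critical
```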
To compute the faults critical to the gate output, the following equation is used:

Critical Faults to E = AND[A,B,C,D] + E(SA0)

This equation is very general and easily implemented in computer hardware. In this manner, thousands of faults can be processed simultaneously. Similar operations exist for the processes of stimulus generation and simulation.

Another area where processing can be made more efficient is the fault analysis process. Many test generation systems analyze IC pin faults. This is a sample, but a very biased one, which has been shown in a number of studies [2] to be unrepresentative of the total test quality. A study done at Siemens Corporation on random fault sampling has shown that the quality of tests generated for a small sample of faults is only slightly less than that of the sample. This is understandable based on the mathematical theory of sampling. Table 1 shows the results of the Siemens study.

Table 1. Random Fault Sampling

                                            Whole circuit   1/4 sample   1/5 sample
NAND equivalents                                  514            514          514
Gate-level faults                                2176            544          436
Test patterns                                     752            459          459
Percent CPU time of total                         100             60           55
Percent faults detected                            88             86           85
Percent faults detected on whole circuit        85.47          85.09        85.27

The computer time and storage savings were considerable for these circuits. Other circuits were analyzed with the same process and the results were similar. The savings for larger circuits are projected to be even greater. These are only a few of the many ideas to lower the cost of test generation without resorting to significant decreases in test quality and quality assurance.

When analyzing the merits of any "tool" one must keep in mind what objectives that tool must accomplish. It is often surprising to us how often this simple rule is overlooked. Second order effects or nice features are always a welcome addition to any effective tool, but are a mere disguise for the ineffectual operation of that tool if the final objectives to be reached are compromised. As examples, could you be fooled into purchasing a computer which calculated ten times faster than any other computer but whose answers were always
incorrect; a car which is easy to maneuver but whose engine was in constant need of repair; or a compass whose accuracy is unparalleled but whose readout is illegible? In these three examples, the obvious objectives of accurate calculation, reliable transportation and knowledge of direction were misused. The objectives in generating digital test programs are also obvious: comprehensive, accurate and repeatable test results which will aid in greatly reducing your cost to manufacture your products. SMC-LASAR does not compromise any of the necessary test requirements simply to make our job easier, nor will it in the future.

It has been suggested that the philosophy on which LASAR is based is not practical for tackling the high density circuits of tomorrow. One point to be made is that only LASAR satisfies the quality test program requirements of today. Therefore, since LASAR stands as the technical leader of today, its developer has the greatest probability of meeting the requirements of the future.

References

1. Armstrong, Douglas B., "A Deductive Method for Simulating Faults in Logic Circuits," IEEE Transactions on Computers, Vol. C-21, No. 5, 1972.
2. F-16 Depot Support Equipment Final Engineering Report, Part II: F-16 Automatic Test Program Generator Evaluation, General Dynamics, CDRL A03H, CCP 5073, 1977.

It had the advantage of allowing the designer to use every technique he knew to obtain the best performance with the fewest circuits. These problems have led to the requirement for design techniques (or rules) in LSI which, when properly used, result in packages that can be readily tested. The text explains the technique for level-sensitive design which can then be expanded into a level-sensitive scan design. The text also describes other important areas of this approach including rules checking, testing of LSSD and interfaces to non-LSSD logic.
COMPUTER-AIDED DESIGN of digital electronic circuits and systems, G. Musgrave, editor
North-Holland Publishing Company © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

AN APPROACH TO A TESTING SYSTEM FOR LSI

MR. H. F. JONES & DR. R. E. SCHAUER
IBM, Data Systems Division, East Fishkill, N.Y.

ABSTRACT

This paper summarizes an approach to a testing system for LSI. Problems encountered with testing unconstrained designs in LSI are reviewed. LSSD permits the partitioning of large sequential logic networks (required for normal machine operation) into smaller combinational logic networks which can then be readily tested using existing test generation techniques.

INTRODUCTION

In the past, the logic designer had great flexibility in the way he used circuits to implement logic functions in machines such as CPUs, channels, and control units. This resulted in a variety of design implementations, many of which had dependencies on the ac characteristics of the individual circuits. This flexibility sometimes led to unexpected timing problems, and complicated the testing. This approach was also supported in component manufacturing: the design interface was well defined and reliably tested, since ac parameters such as rise time, fall time, and circuit delay could be readily tested. With LSI, the well-defined and reliably-tested circuit-to-circuit interfaces will no longer exist. Consequently, it will become impossible or impractical to test each circuit for all of the ac design parameters. Thus, it is important to find methods of designing logic subsystems that have low sensitivity to these parameters.
2. LEVEL SENSITIVE DESIGN

A design method will be outlined here that, provided it is implemented properly, will provide reliable operation without strong dependence on hard-to-control ac circuit parameters. This design method, called level-sensitive design, can be defined as follows: "A logic subsystem is level-sensitive if and only if the steady-state response to any allowed input state change is independent of the circuit and wire delays within the subsystem. Also, if an input state change involves the changing of more than one input signal, then the steady-state response must be independent of the order in which they change." (Steady-state response is the final value of all logic gate outputs after all change activity has terminated.)

It is clear from this definition that level-sensitive operation is dependent on having only "allowed" input changes. Thus, a level-sensitive design method will, in general, include some restriction on how these changes occur. In the detailed design rules, these restrictions on input changes are applied mostly to the clock signals. A level-sensitive subsystem is assumed to operate as a result of a sequence of allowed changes in input state, with enough time between changes to allow the subsystem to stabilize in the new internal state. This time duration is normally insured by means of clock signals that control the dynamic operation of the logic network.

A principal objective in establishing design rules is to obtain logic subsystems that are insensitive to ac characteristics such as rise time, fall time, and circuit delay. Consequently, the basic storage element should be a level-sensitive device that does not contain a hazard or race condition. The polarity-hold latch meets these requirements. A hazard-free polarity-hold latch is shown in Figure 1. It has two input signals, the data signal D and the clock signal C. When C = 0 during the time when the data signal D may be changed, the latch cannot change state. This prevents the changing of D from immediately altering the internal state of the latch. When C = 1, the internal state of the latch is set to the value of the data input. Thus, the clock signal will normally occur (change to 1) after the data signal, D, has become stable at either a 1 or a 0. Other input signals have almost no restrictions on when they may change.
This causes the latch to be set to the new value of the data signal at the time the clock signal occurs. The correct changing of the latch is not dependent on the rise or fall time of the clock signal, but only on the clock signal being 1 for a period equal to or greater than some time T0, where T0 is the time required for the signal to propagate through the latch and stabilize.

Figure 1: Hazard-free polarity-hold latch. (a) Symbolic representation. (b) Logic representation.

The modification of the polarity-hold latch to include shift capability requires adding a clocked input to the latch and a second latch, L2, to act as intermediate storage during shifting. A design for a polarity-hold shift register latch (SRL) is shown in Figure 2. It consists of two latches, L1 and L2. Terminal I is the input for the shift register, and L2 is the output.

Figure 2: Polarity-hold SRL. (a) Symbolic representation. (b) Implementation in AND-INVERT gates.

When the latch is operating as a shift register, data from the preceding stage are gated into the polarity-hold latch L1 via I by a change of the A shift signal to 1. After A has changed back to 0, the B shift signal gates the data in the latch L1 into the output latch, L2. A and B can never both be 1 at the same time if the shift register is to operate properly. As long as the shift signals A and B are both 0, the L1 latch operates exactly like a polarity-hold latch.

The interconnection of the SRLs into a shift register is shown in Figure 3. The shift signals A and B are connected in parallel, and the I (input) and +L2 (output) signals are strung together in a loop.
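The A/B shifting just described behaves like a two-phase master-slave shift. A small behavioral sketch in Python (illustrative only; it models the latch values, not the AND-INVERT implementation of Figure 2):

```python
class SRL:
    """Polarity-hold shift register latch: L1 plus intermediate latch L2."""
    def __init__(self):
        self.l1 = 0
        self.l2 = 0

def shift(register, scan_in):
    """One shift cycle: an A pulse followed by a B pulse (never both 1)."""
    data = scan_in
    for srl in register:            # A pulse: each L1 captures the
        srl.l1 = data               # preceding stage's +L2 (the first
        data = srl.l2               # stage captures terminal I)
    for srl in register:            # B pulse: each L2 captures its L1
        srl.l2 = srl.l1
    return register[-1].l2          # value at the scan-out terminal

reg = [SRL(), SRL()]
outs = [shift(reg, b) for b in (1, 0, 1, 1)]
print(outs)   # -> [0, 1, 0, 1]: each bit emerges after two cycles
```

Because the L2 values are only read during the A pulse and only written during the B pulse, the model reflects why A and B must never be 1 together: overlapping them would let data race through more than one stage per cycle.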
3. DESIGN STRUCTURE

It will be shown later in this paper that the testing problem can be greatly simplified if level-sensitive polarity-hold latches are also capable of being operated in a shift register. Therefore, in addition to their normal system function, it must also be possible to shift data into and out of the latches in the system. A specific set of design rules may now be defined to provide level-sensitive logic subsystems with a scannable design that will aid testing.

1) All internal storage is implemented in hazard-free polarity-hold latches as already described.

2) The latches are controlled by two or more non-overlapping clocks such that:
   a) a latch X may feed the data port of another latch Y, either directly or through combinational logic, if and only if the clock that sets the data into latch Y does not clock latch X;
   b) a latch X may gate a clock C1 to produce a gated clock C1g which drives another latch Y if and only if clock C1g, where C1g is any clock produced from C1, does not clock latch X;
   c) no clock can be ANDed with either the true value or the complement value of another clock.

3) It must be possible to identify a set of clock primary inputs from which the clock inputs to SRLs are controlled, either through simple powering trees or through logic that is gated by SRLs and/or nonclock primary inputs. In addition, the following rules must hold:
   a) all clock inputs to all SRLs must be at their off states when all clock primary inputs (PIs) are held to their off states;
   b) the clock signal that appears at any clock input of an SRL must be controlled from one or more clock PIs, such that it is possible to set the clock input of the SRL to an on state by turning any one of the corresponding PIs to its on state and also setting the required gating condition from SRLs and/or nonclock PIs.

4) Clock primary inputs may not feed the data inputs to latches, but may only feed the clock input to the latches or the primary outputs.

A sequential logic network designed in accordance with Rules 1 through 4 will be level-sensitive. To simplify testing and minimize the primary inputs and outputs, two more rules must be followed:

5) All system latches are implemented as part of an SRL. All SRLs must be interconnected into one or more shift registers.
Each shift register has an input, an output, and shift clocks available at the terminals of the package.

6) There must exist some primary input sensitizing condition, referred to as the scan state, such that:
   a) each SRL or scan-out PO is a function of only the single preceding SRL or scan-in PI in its shift register during the shifting operation;
   b) all clocks except the shift clocks are kept off at the SRL inputs;
   c) any shift clock to an SRL may be turned on or off by changing the corresponding clock primary input for each clock.

If these design rules are followed, a logic subsystem with two clock signals will have a structure as shown in Figure 4. P1 and P2 are primary inputs to the network and Z1 and Z2 are primary outputs. Each of the combinational networks, N1 and N2, is a multiple-input, multiple-output logic network. It is evident from Figure 4 that the two clock signals partition the logic subsystem into two parts, each composed of a combinational network and a set of SRLs.

Figure 3: SRL interconnection at chip and module. (a) Chip with three SRLs. (b) Module with four chips.

Figure 4: General structure for an LSSD subsystem with two system clocks.
The operation of the subsystem is controlled by the system clock signals, C1 and C2. At C1 time, C2 is zero and the inputs and outputs of N1 are stable (assuming that the external inputs P1 are also stable). The clock signal, C1, is then allowed to pass to the SRL system clock input. This gates the output values of N1 into the L1 latches. Some of the latches may change at C1 time, and these signal changes immediately propagate through network N2. As soon as C1 is changed back to 0 and all L1 signals have finished propagating, the next clock signal, C2, may occur. For correct operation of the subsystem, all that is needed is for the clock signals to be long enough to set the latches, and for the time between clock signals to be long enough to allow all latch changes to finish propagating.

The network in Figure 4 is called a single-latch design, since all the system inputs to networks N1 and N2 are taken from the L1 latch. This structure meets the requirements for level-sensitive operation as defined in the preceding section, and ensures that there is little or no dependency on ac circuit parameters. For proper operation of the logic subsystem, all that is needed is that the delay through the combinational networks N1 and N2 be less than the corresponding time between the clock signals.

The network shown in Figure 5, on the other hand, is another one that follows the rules. It is called a double-latch design, since all the system inputs into network N are taken from the L2 latch. Making use of the L2 latch reduces the overhead associated with such a method. The overhead will be discussed later in this paper.

Figure 5: LSSD double-latch design.
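The two-phase operation can be sketched in a few lines of Python. The 2-bit network N below is a hypothetical example, not one of the paper's figures: at C1 the L1 latches capture the outputs of N computed from the stable L2 values, and at C2 the L2 latches capture the L1 values.

```python
def N(l2):
    """Hypothetical combinational network: a 2-bit increment."""
    b0, b1 = l2
    return [1 - b0, b1 ^ b0]     # low bit toggles; carry toggles the high bit

def system_cycle(l2):
    l1 = N(l2)                   # C1 pulse: L1 <- N(L2); L2 is still stable
    return l1, list(l1)          # C2 pulse: L2 <- L1

l1, l2 = [0, 0], [0, 0]
trace = []
for _ in range(4):
    l1, l2 = system_cycle(l2)
    trace.append(tuple(l2))
print(trace)   # -> [(1, 0), (0, 1), (1, 1), (0, 0)]
```

The point of the two non-overlapping phases is visible in the code: N is evaluated only from L2 values that cannot change while C1 is on, so no gate-delay assumption is needed beyond the clocks being spaced widely enough.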
A sequential logic network that is level-sensitive with scan capability as per Rules 1 through 6 is called a level-sensitive scan design, LSSD.

The concept of level-sensitive design is completely compatible with the concept of three-value simulation that has been used in designing many IBM systems. A properly designed level-sensitive logic subsystem can be simulated with three-value simulation without using any delay blocks. This will, in fact, provide a check on whether the design is level-sensitive.

USE AND ADVANTAGES OF LSSD

The use of LSSD helps to solve the LSI testing problems in the following ways:

1) The correct operation of the logic network is nearly independent of the ac characteristics of the devices and circuits.
2) The elimination of all hazards and races greatly simplifies both test generation and fault simulation used for testing large networks.
3) A network that performs the function of a large sequential network in its application can be tested as combinational logic.

These aspects will be discussed further. Although 1 and 2 above go a long way toward solving the LSI testing problems, the ability to test networks as combinational logic is one of the most important benefits of LSSD. Test generation for large sequential logic networks remains very difficult, because no general solution has yet been found to the problem of automatically generating test patterns for these circuits. For combinational logic networks, the automatic generation of test patterns is relatively easy, and comes very close to obtaining 100% coverage of stuck faults. Therefore, one way to effectively solve the sequential test-generation problem is to reduce it to a combinational problem. This is easily done by operating the polarity-hold latches as SRLs during testing. The outputs can then be clocked into the latches and shifted out for inspection.

The scan capabilities of the network significantly help in its testing. For example, the combinational network N in Figure 5 can be tested in the following way:

1) A desired test pattern is shifted into the SRLs (Y1, Y2, ..., Yn) and applied to the primary inputs Pi.
2) After the signals have had time to propagate through
the network N, the clock C1 is turned on long enough to store the X1, X2, ..., Xn signals into the L1 latches of the SRLs.
3) The pattern in the L1 latches is then shifted out and compared with the expected response.

During testing, any desired pattern of 1s and 0s can be shifted into the polarity-hold latches as inputs to the combinational networks. The output patterns can be obtained from the response outputs, and by shifting out the bit pattern in the SRLs. Thus, all logic gates can be given combinational tests by applying the appropriate test patterns at the primary inputs Pi and at the SRL outputs by shifting in serially. Any partitioning of the general structure shown in Figure 5 will result in a structure that can be tested in the same way. Moreover, the same method can be used to test at any packaging level. The shift register must also be tested, but this is easily accomplished by shifting a sequence of 1s and 0s through the SRLs, as detailed later in this paper.

The use of SRLs to enter and retrieve bit patterns will enable dc testing of the logic subsystem. That is, it will verify that the logic gates are properly interconnected and function correctly in steady-state operation. The delay or timing characteristics will not be tested by this method; for those, other methods can be used. This fact can be seen by considering the operation of the structure in Figure 5. At the time C2 occurs, some of the L2 latches of the SRLs may change state as a result of the signals stored in the L1 latches. These changes must propagate through the combinational network N and stabilize at X1, X2, ..., Xn before C1 can occur. More specifically, the signals from the L2 latches must propagate fully through N during the time between the beginning of C2 and the beginning of C1. Thus, only the total delay over paths from the input to the output of network N need be measured; only the maximum network delay need be controlled and measured. There is no longer any need to control or test rise time, fall time,
or minimum network delays. Some of the other advantages of using LSSD are:

1) The correct operation of the logic subsystem is almost independent of any transient or delay characteristics of individual logic circuits; individual gate delays are not important. The only delay requirement, then, is that the worst-case delay through N must be less than some known value.

2) Using the SRLs as shown in Figure 5 provides the ability to monitor nets buried within the chip. Thus, it enables the technician debugging a machine to monitor the state of every latch in the logic subsystem. This can be done on a single-cycle basis by shifting all the data in the latches out to a display device. This will not disturb the state of the subsystem if the data are then shifted back into the latches in the same order as they are shifted out.
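The three-step scan test of network N described earlier can be sketched end to end. The two-input network N and the injected stuck fault below are hypothetical, and scan-in/scan-out are abstracted to list operations:

```python
def N(y1, y2, pi):
    """Hypothetical combinational network under test (two SRLs, one PI)."""
    return [y1 & pi, y1 ^ y2]                 # response nets X1, X2

def scan_test(pattern, pi, expected, fault=None):
    y1, y2 = pattern                          # step 1: scan the stimulus in
    response = N(y1, y2, pi)
    if fault == "X1 s-a-0":                   # optional injected stuck fault
        response[0] = 0
    captured = list(response)                 # step 2: C1 pulse into the L1 latches
    return captured == expected               # step 3: scan out and compare

print(scan_test([1, 1], 1, expected=[1, 0]))                    # True: fault-free
print(scan_test([1, 1], 1, expected=[1, 0], fault="X1 s-a-0"))  # False: fault detected
```

Note that the test only compares steady-state values; as the text explains, this is a dc test, and delays are covered separately by bounding the worst-case path through N.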
In this way, the status of all the latches could be examined after each clock signal.

5. NETWORK SUBDIVISION

Another important advantage of using the LSSD design rules is the ability to subdivide large networks into several smaller ones. It is well known that the machine computation time required to perform automatic test generation and fault simulation does not increase in a linear manner with the network size. A fair approximation is that if N = number of gates in the network, the total processing time is approximately proportional to N². In practical terms, this means that an attempt to process a large logic network in a single piece will be more expensive than to process the same network in several smaller pieces. If the network follows the LSSD design rules, it is possible to sub-divide it into smaller pieces. Test generation can then be performed on each piece independently, and the computer resources required to perform test generation rise in a more linear fashion with the number of blocks.

Network sub-division has another important benefit. If design changes are made during system bring-up, or if machine features are added, test generation need only be re-done on the network sub-divisions affected by the change or addition. This can result in great savings in the test generation cycle as a new machine design is being developed.

The sub-division procedure follows a simple algorithm:

1) From each network Primary Output (PO) or shift register latch (SRL), do a complete backtrace of all paths converging on that PO or SRL. Stop the backtrace when a primary input (PI) or SRL is reached. This step forms a "cone" whose top is the backtrace starting point and whose base points are at PIs or SRLs. Only the system data and scan inputs to SRLs are backtraced; clock inputs are handled in a special manner, described below. The blocks and nets contained in each cone are recorded in a list. The cones formed by tracing back from PO "O" and SRL "M" are shown in Figure 6.

2) Combine the cones into sub-networks for test generation. The approximate size of the sub-networks is determined by the system user, and represents a trade-off between the number of test generation processing steps required to cover the entire network and the run time needed to perform each step. In the process of combining backtrace cones, an attempt is made to minimize the amount of logic replicated among sub-networks.
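Step 1 of the procedure is an ordinary backward traversal of the netlist. A sketch in Python over a hypothetical netlist (net and block names are invented; the "PI:"/"SRL:" prefixes mark the stopping points):

```python
# net -> (driving block, list of that block's input nets)
NETLIST = {
    "O":  ("L", ["n1", "SRL:D"]),
    "n1": ("G", ["PI:P1", "SRL:B"]),
    "M":  ("H", ["SRL:C"]),
}

def cone(start):
    """Backtrace all paths converging on a PO or SRL; record the blocks."""
    blocks, todo = set(), [start]
    while todo:
        net = todo.pop()
        if net.startswith(("PI:", "SRL:")):   # stop the backtrace at a PI or SRL
            continue
        block, inputs = NETLIST[net]
        blocks.add(block)
        todo.extend(inputs)
    return blocks

print(sorted(cone("O")))   # -> ['G', 'L']: the cone whose top is PO "O"
print(sorted(cone("M")))   # -> ['H']
```

Cones that share blocks (here, any two cones whose block sets intersect) are the candidates for merging into one sub-network, subject to the user's size limit.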
This is achieved by combining those cones which contain common logic into the same sub-network, provided that the prespecified sub-network size is not exceeded.

Using Figure 6 as an example, the cones are:

    Cone   Starting Point   Blocks Included   Stopping Points
    1      PO "O"           L, G              SRLs B, C, D
    2      PO "P"           H                 SRLs B, D
    3      SRL "M"          None              SRL C
    4      SRL "N"          G                 SRLs C, D
    5      SRL "Q"          J                 SRLs D, E

Figure 6: Example network for illustration of the sub-division procedure.

Because cones 1 and 4 have a common block, they will be combined into a single sub-network. The other cones, 2, 3 and 5, form independent sub-networks.

Clock inputs require special handling. In order to capture a test result in an SRL, the system clock input to that SRL must be pulsed. Thus, the logic network which drives the clock inputs of the output SRLs of a sub-network must be included in that sub-network. Note that each sub-network will be bounded by points at which a test stimulus can be applied (PI or SRL), or those at which a response can be measured (PO or SRL). When we apply this rule to the example, we find the following four sub-networks:
    Sub-network   Outputs           Blocks    Inputs
    1             PO "O", SRL "N"   L, G, F   SRLs B, C, D
    2             PO "P"            H         SRLs B, D
    3             SRL "M"           K, F      SRLs C, D
    4             SRL "Q"           J, F      SRLs D, E

Clock driver networks feeding the input SRLs (B, D, C, E in the example) need not be included, because the outputs of these SRLs may be controlled by loading the shift register of which they form a part.

Test generation and simulation is performed individually on each sub-network. The test data for each of the sub-networks is independent of that for any other sub-network. Next, the lists of blocks and nets for the sub-networks are used to select and build model tables for test generation and to construct sub-network fault lists.

TEST GENERATION FOR LSSD SUB-NETWORKS

The generation of input stimuli to test the various LSSD sub-networks utilizes the shift register capabilities of the SRLs to 'load' in the patterns. The SRLs are used as inputs and outputs of the network. Hence, the first step in the automatic generation of test patterns is to verify the correct shifting responses of the shift registers. The test consists of two parts: (1) a "flush" test, in which all shift clocks are turned on and a signal is flushed through the register from scan input to scan output; and (2) a "shift" test, in which a 00110011 pattern is shifted through each register. Analysis has shown that these types of test patterns are sufficient to detect stuck faults in the shift register paths.

The tests for the stuck faults in the sub-networks are generated by use of an algorithm similar to the D-algorithm of Roth. A simulation step is performed to predict responses to the test patterns, and to measure test coverage. Because the logic between SRLs is combinational, a very high test coverage, approaching 100%, is normally obtained. A 100% coverage may not be possible due to logic redundancies.

LSSD RULES CHECKING

Another important requirement of a testing system for LSI is a capability for the automatic checking of logic structures for compliance with the design rules which are established. Recall that the six design rules described earlier for LSSD apply primarily to the configurations and the control of specific paths in the network, such as the scan paths between SRLs, system data paths between SRLs, and the paths between the clock primary inputs and the SRLs. This suggests that it would be possible to test a network for compliance with the design rules by writing a program which could trace out the particular paths to which the rules apply.
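The shift test described above is simple to sketch. In the Python fragment below the register is abstracted to its L2 values, and the flush test is omitted since it exercises a timing path rather than a pattern:

```python
def make_register(length):
    return [0] * length                 # L2 values of the SRL chain

def shift(reg, bit):
    """One A/B shift cycle, abstracted: insert at scan-in, emit at scan-out."""
    reg.insert(0, bit)
    return reg.pop()

def shift_test(reg):
    """Shift the 00110011 pattern through and compare what emerges."""
    pattern = [0, 0, 1, 1, 0, 0, 1, 1]
    outs = [shift(reg, b) for b in pattern + [0] * len(reg)]
    return outs[len(reg):len(reg) + len(pattern)] == pattern

print(shift_test(make_register(4)))   # -> True for a fault-free register
```

The 00110011 pattern exercises each latch with both 0-to-1 and 1-to-0 transitions and with repeated values, which is why, together with the flush test, it suffices for stuck faults in the shift path.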
The basic idea behind the design rules checking is that a logic simulation program may be used to perform the rules checking. The procedure is similar to that performed to verify the design of a logic network, but the calculations performed are modified, and the sequence of the control statements executed by the program is carefully structured to permit testing for violations of the various rules.

The adaptation of the logic simulator to allow rules checking is accomplished by modifying the routines which calculate the response of the logic gates to an applied stimulus. The modified routines are small programs, called behavioral models. Behavioral models are provided for primitive logic functions, such as AND, OR, NAND, and NOR. Behavioral models are also provided for the SRLs. A behavioral model can vary the algebra used to calculate an output value of a gate. Note that this is not done in conventional logic simulation, because the gate calculation routines always execute a fixed and predefined algorithm. This makes it possible to use the simulation program to perform many different types of tracing operations. The sequence of actions performed by the simulation scheduler is organized into distinct steps which correspond to checking procedures for the various rules.

The method by which a logic simulation program is used to perform tracing functions may be easily explained. If a logic zero signal is placed on any input of a multi-input AND gate, it forces a zero on the gate's output, regardless of the signal levels on the other inputs to the gate. That is, the zero value is dominant. Now, suppose that we have constructed a multi-input combinational network composed entirely of AND gates, such as the network shown in Figure 7. Set only one input to the network to logic 0; call this input "A". Set all other inputs to logic 1, and then simulate the network. After simulation, all the paths through the network which begin at "A" will be defined, because every gate in these paths will be at logic 0.

This same idea is used in a rules checking system. If the scan path along the shift register is to be traced out, then the behavioral model used will force a dominant logical value on all nets in the scan path. If the rule concerns the clock signals, dominant values will be forced on nets in the clocking paths. The behavioral model used is determined by the rule being checked.

An example of rule checking is shown in Figure 8. Suppose we want to check compliance to Rule 3a, which requires that clock inputs to all SRLs must be 'off' when all clock primary inputs are 'off'. For this test, the behavioral models are programmed to calculate gate stimulus-response relationships according to the rules for the three-valued logical operations shown in Table 1.
These models can vary the stimulus-response relationship of the logic gates to provide different algorithms for checking compliance to the various rules. The behavioral models are called by the simulation scheduling routine whenever an input stimulus-output response calculation needs to be performed for a particular type of logic gate.

Figure 7: Example of use of the logic simulator for path tracing.

For this test, we set all the clock primary inputs to their "off" (inactive) state, and set all other primary inputs to X. All internal gates in the network are also set to X. (The "0" and "1" states are the usual logic values; the "X" means: don't care.) Using this behavioral model, the output values of all gates driven by the clock primary inputs are then calculated. If any gate's output changes from its initialized X value, all the gates that it drives will be calculated. The process repeats until the clock signals reach a primary output or an SRL. In this example, an X remaining on an SRL clock input indicates a violation of rule 3a: the clock cannot be turned off. In case of a violation, an error message is provided to allow the designer to quickly locate and correct the problem.

Figure 8: Example of rules checking; an X remaining on an SRL clock input means the clock can't be turned off (violation).
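The Rule 3a check can be sketched with the three-valued algebra of Table 1. The two-gate netlist below is hypothetical, and the fixed-point loop stands in for the event-driven scheduler:

```python
X = "X"

def AND3(a, b):
    """Three-valued AND: 0 is dominant; X only if no input forces a value."""
    return 0 if 0 in (a, b) else (1 if (a, b) == (1, 1) else X)

def OR3(a, b):
    """Three-valued OR: 1 is dominant."""
    return 1 if 1 in (a, b) else (0 if (a, b) == (0, 0) else X)

def check_rule_3a(gates, values, srl_clock_nets):
    """Propagate 0/1/X until stable; an X left on an SRL clock input
    means the clock cannot be forced off, a Rule 3a violation."""
    changed = True
    while changed:
        changed = False
        for net, (op, (i1, i2)) in gates.items():
            v = op(values[i1], values[i2])
            if values[net] != v:
                values[net] = v
                changed = True
    return [n for n in srl_clock_nets if values[n] == X]

gates = {"clk_a": (AND3, ("clkPI", "d1")),   # clock gated from a clock PI
         "clk_b": (OR3,  ("d1", "d2"))}      # clock NOT controlled by a clock PI
values = {"clkPI": 0, "d1": X, "d2": X, "clk_a": X, "clk_b": X}
print(check_rule_3a(gates, values, ["clk_a", "clk_b"]))   # -> ['clk_b']
```

The dominant-value behavior is exactly what makes the check work: setting the clock PIs to their off state forces 0 down every properly controlled clock path, so only improperly controlled SRL clock inputs are left at X.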
The remaining rules are tested in similar fashion. In each case, the various behavioral models used insure that the gate calculations are appropriate for the rules being checked. Additional types of checking can easily be added. Because other block types such as memory arrays may be represented as behavioral models, the checking system may be extended to include complex designs which contain a wide range of hardware. Automatic rules checking capability allows the designer to check his own logic design for compliance with the design rules.

COST/PERFORMANCE IMPACT OF LSSD

The negative aspects of LSSD include the following:

1. All timing within the subsystem is controlled by externally generated clock signals.
2. External asynchronous input signals must not change more than once every clock cycle.
3. The polarity-hold latches in the shift registers are logically two to three times as complex as simple latches.
4. Up to four additional I/O points are required at each package level for control of the shift registers.

The overall performance of the subsystem may be degraded by the clocking requirement, but the effect should be small. The clock and the distribution system for the clock signals can be accurately designed and tested to minimize skew. The actual cycle time is determined by the worst-case delay paths, just as in any other design method, so there is no inherent reason why the design rules should greatly increase cycle times.

The requirement for additional I/O pads at chip and module levels is a concern. However, if these I/O pads can be shared to also provide a standard interface for operator and CE consoles, they may eliminate other interconnections and I/O points that would otherwise be required.

The logic gate overhead for implementing the design rules has ranged from 4% to 20%; the difference is due to the extent to which the system designer made use of the L2 latches for system functions. Since the relationship is not one-to-one, even for the worst case, the cost overhead at the card or system level is considerably less than 20%.

LSSD AND NON-LSSD MIX

In cases where non-LSSD designs must be intermixed with LSSD logic, the design rules can be expanded to provide the ability to partition the logic so that the LSSD portions may be handled as stated previously and the non-LSSD portions may be handled by other existing automatic or manual test generation techniques. Because an array contains memory, the Array/LSSD arrangement does not follow the LSSD rules. However,
the orderly structure of the array allows the use of automatic test generation methods for the combinational logic between the SRLs and the array. The general interface between LSSD logic and arrays is illustrated in Figure 9.

Figure 9: General structure of the LSSD/Array interface.

The logic preceding the array is tested using stimuli presented to it via SRLs and PIs. The output of the logic is written into and then read out of the array. The testing of the logic at the array outputs requires that the proper stimuli be applied from the array, and then propagated through the logic on the array outputs so as to be observable at POs and/or SRLs. It should be noted that high speed tests often used in array testing may not be available unless the array inputs are controllable (possibly through combinational logic) from PIs, since testing through the SRLs is by necessity slow (limited by the scan-in speed). AC tests cannot be guaranteed through SRLs.

Other non-LSSD networks may contain asynchronous sequential logic, analog networks or specials which contain data storing elements that do not follow the general LSSD design rules. These non-LSSD networks may be partitioned from LSSD networks by use of Stable SRLs (SSRLs). The general form of an SSRL is shown in Figure 10. Here Ci and Cj are clock signals provided by the System Clock to prevent unwanted outputs from occurring during scanning operations. The L1 and L2 are connected to form an SRL as discussed previously, while L3 is a "stable" latch used to provide a stable system output that will not change during LSSD scan operations. Figure 11 shows the general interface between non-LSSD and LSSD logic using SSRLs.

Separate test patterns for the array must be provided by the array designers. For an embedded array, the test system can then translate
The output of the logic preceding the array is written into and then read out of the array. For an embedded array, stimuli are applied via SRLs and/or PIs and the outputs are observed on POs and/or SRLs. The test system can then translate these array test patterns into the appropriate scan-in and scan-out tests and combine them with other test patterns for the network. It should be noted that high speed tests often used in array testing may not be available unless the array inputs are controllable (possibly through combinational logic) from PIs.

Figure 10: General Form of an SSRL

Figure 11: LSSD/Non-LSSD Interface

CONCLUSIONS

Important aspects of an approach to a testing system for LSI have been presented. It outlines a logic design and testing technique that eliminates or greatly reduces many of the problems in designing, manufacturing, and maintaining LSI systems. The following is a summary of some of the benefits offered by this design approach:

1. System performance is not dependent on hard-to-control ac circuit parameters such as rise time, fall time, or minimum delay. It is dependent only on the longest path delay being less than some specified value.

2. Test generation and testing are simplified to the well understood method of combinational logic network testing.

3. The development and use of tools for design verification, simulation and checking is simplified.

4. The ability to dynamically monitor the state of all internal storage elements is inherent in the design. This eliminates the need for special test points, simplifies manual debugging, and provides a standard interface for operator and maintenance consoles.

5. The insensitivity to timing problems and the modular design structure help reduce the impact of engineering changes.

6. The method used for testing chips and modules can also be used for diagnostic tests in the field.

7. The level-sensitive design allows the use of a unit logic hardware simulator for development design without creating timing problems in the transition from unit logic to dense functional chips.
"A Logic Design Structure for LSI Testability". Williams. S. P. Franklin. U. 1 to 0. (References 1.AN APPROACH TO A TESTING SYSTEM FOR LSI 6. Franklin. Williams. 14th Design Automation Conference. June. H. P. "Introduction to an LSI Test System". X to X AND 0 1 0 0 0 0 1 0 1 X X 0 X X X OR 0 0 0 1 1 X X X X 1 X ACKNOWLEDGMENTS In the preparation of this paper. Godoy. Petrini. Orosz.A. 1. Eichelberger and T. 6). R. 203 7. B. France. Proc. N. The method used for testing chips and modules can also be used for diagnostic tests in the field. Ε. "Automatic Checking of Logic Design Structures for Compliance with 3. 2. Petrini. 3. W. F. W. Gorges. TABLE I THREE VALUED LOGICAL OPERATIONS WOT will change 0 to 1. Bottorff and G. 14th Design Automation Conference. F. J. Correia.S.. Godoy. H. REFERENCES M. by M. G. Eichelberger. The level-sensitive design allows the use of a unit logic hardware simulator for development design without creating timing problems in the transition from unit logic to dense functional chips. Proc. Β. 1977. C. 1977. E. Bottorff. June. C. 2. E. June. and E. B. H. . 1977. extensive use was made of the information and material presented at the 14th Annual Design Automation Conference held in New Orleans. B. Correia. Davis. June. JONES. Roth. 14th Design Automation Conference. P. "Delay Test Simulation". T. 7. June. June. September 25. 11. IBM J. S.204 H. 1974. Eichelberger. pp. R. Storey and J." U. 1973. "Hazard Detection in Combinational and Sequential Circuits". 10." IBM J." U. E. P. E. Proc.. p.. "Method of Propagation Delay Testing a Functional Logic System. H. France. pp. Rasmussen.S. Eichelberger. 1977. 14th Design Automation Conference. M. Vidunas and W. Develop. 9. J. 1977. B. Res. B. R. B. "Diagnosis of Automated Failures: A Calculus and a Method. . SCHAUER Testability Ground Rules". T. "Level Sensitive Logic System. "Method of Level Sensitive Testing a Functional Logic System. Garges. Proc.E. J. 1977. Res. 5. N. "Delay Test Generation". and E. 
TECHNICAL SESSION V

Chairman: G. MUSGRAVE, Brunel University, United Kingdom


AN ENGINEERING COMPONENTS DATA BASE

M. Tomljanovich, R. Colangelo
SELENIA S.p.A., Roma, Italy

The paper presents the experience conducted in Selenia in defining, implementing and using a data base system to manage technical information on the components used in the electronic industry. The data base is considered as part of a larger corporate technical information system, devoted to serve all users in the company. The design goals, the structure and the contents of the data base will be described, together with some details on the physical implementation. An on-line facility, called RACE, to access the data base has been realized; its two basic modes of operation (direct access and associative retrieval) are described.

FOREWORD

There is a general consensus today on the fact that, for industrial organizations, the way in which information is handled is of primary importance for successful enterprising. Particularly in the electronic industry, the vital need to reduce development times in order to cope with a highly competitive market, and the fast evolving technology, have forced the organization to use computerized techniques to help with the high flow of technical information and to set up automated systems in specific areas.

The development of new and powerful computing technologies and techniques, such as computer networks and data bases, makes it possible to go further in the process of automating industrial activities, through the sharing of data bases and the on-line connection of processes, in an integrated information network. The next stage is therefore the integration of existing automated systems. Integration has three main objectives:
- reduction of the overall turnaround time;
- better use of corporate facilities;
- central control of resources and costs related to specific flows of activities.

In a rough schematization, it is possible to see the integrated network made of two components: the net of the areas requiring the same data (more generally, facilities) in a single working cycle (e.g. the design of p.c.b.s: wiring, testing, documentation, etc.), and the net which conveys information among different working cycles (usually performed over different periods of time). A block representation of the net of the first type has been conceived by J. Vlietstra and is drawn in fig. 1.
Figure 1: Information links in development-production processes (design, manufacture, product stocks, quality control, logistics, documentation and parts procurement)

Figure 2: A "floor": an integrated design automation (IDA) system (design verification, CAM processing, input processor, D.A. libraries, design data base, components data base, links to other "floors")

INFORMATION SYSTEMS AND AUTOMATION IN SELENIA

The information structure presented so far has to be considered more a trend than a reality. The major drawback to its realization comes from the resistance of the organization to any structural change. Moreover, the outlined structure implies new constraints and rules (standardization), whose introduction must be very carefully planned and timed. On the other side, systems' integration can be considered as a logical consequence of what has been developed in order to get "economies of scale", if automated procedures have been experienced and sufficiently "digested". Selenia is just in this situation: basic D.A. systems in traditional application areas like p.c.b. design, layout, fabrication, wiring and testing have been implemented and generally accepted as useful tools by the development departments.

If we try to push forward the similarity, we can imagine the entire organization as a building in which "elevators" correspond to files of data and/or information (see fig. 2). The highly interacting activities, delimited by different blocks, communicate among themselves through an information "bus". The various D.A. subsystems work on the same "floor" (i.e. are operated in the same working cycle); the subsystems require libraries of simulation models, topological data, schematics of components etc., and each library is dedicated only to a specific application. Moreover, selected design data are intercepted and accumulated into a design file (data base structured), on which the required central project control can be implemented. As a whole, the picture represents the automation of a complex of activities located at the design "floor" of a company. Other floors, such as the Quality Control dpt., the documentation dpt., logistics, etc., can be similarly automated using a structure in which a main "aisle" supports the traffic going into and out of the various application "rooms".

So far, in Selenia practical actions along the guidelines drawn above consist of:
a) a design review of the D.A. subsystems in operation, aiming to an integrated structure such as in fig. 1 (just started);
b) design, implementation and use of an engineering components data base (operational);
c) design, implementation and use of an application system utilizing the components data base, called RACE (operational);
d) design, implementation and use of a stock management & parts procurement data base (operational);
e) design, implementation and use of an application system utilizing the stocks management data base, called SIGMA (operational).

Hereafter, the systems in b and c will be presented.

CONCEIVING A COMPONENTS DATA BASE

The Engineering Components Data Base has been originally conceived by the Design Automation Group with the purpose of serving:
- human beings (designers);
- automated procedures (D.A. subsystems).
If we examine design activities, there is need for a great amount of data. In the last few years technology evolution has greatly changed design methodologies. The number of parameters and constraints to be considered during the specification phase of a project has been increased by the need to prevent fabrication and production troubles (e.g. "design with testing in mind"). A large amount of information must therefore be available at design time, and a great part of this information concerns components: technical characteristics, availability, costs, suppliers, meeting of standards and specs, etc.

There is, as an example, a basic difference in the utilization of components' data during the design and the production phases. At design time there is need for a great amount of data, arranged in synthetic reports, to allow comparisons and ease choices. At production time almost no choices have to be made, most of the job being verification and retrieval of few, specific data.

Moreover, it has been recognized that many areas in the company share with designers the need to access up-to-date, reliable and consistent components' data: Quality Control, documentation dpt., manufacturing etc.; as a matter of fact, the whole company. The increase of eligible users has had little effect on the contents and structure of the data base. The necessity that information, usually handled by different departments in the company, be supplied to a large population of designers at the same time, reinforced the need of a central components data base and suggested the development of an on-line enquiry system. In order to avoid delays, there is a clear necessity to correlate and synchronize the updating and handling of libraries. A central data base has therefore been considered the right choice to solve the above problems.

THE ENGINEERING COMPONENTS DATA BASE

The specification of the Engineering Components Data Base can be summarized in the following points:
- the data base should store all the relevant data about components, to satisfy the user community in the company;
- access to the data base from remote sites should be taken into account;
- easy connection for any requiring application system should be allowed.

Hardware and Software configuration

The data base has been implemented on a UNIVAC 1100 series machine, using the standard DMS1100 data management system. Application software has been developed in ASCII COBOL. The reasons for the choice were the following:
- the Company owned a central computing facility based on a UNIVAC machine, and the different plants were connected to the central computer through batch and interactive terminals;
- compatibility was required with application systems already operational on the central machine; this, however, suggested taking into account different access methods for the on-line enquiry system.
Contents and data structures

Object of the data base are all those components certified by the Selenia Quality Control (Q.C.) Dept., i.e. components used or eligible to be used in products manufactured by Selenia. The classes of data stored in the data base have been carefully chosen by analysing users' needs. In figure 3 there is the list of the primary requirements expressed by the six broad areas into which users have been conventionally divided.

Figure 3: Classes of requirements for the components data base (user areas: design, quality control, manufacturing, documentation, logistics, purchasing)

Technical data comprise performance and characteristics of a component, with no reference to the stress of the system in which the component itself should be used. Quality level is the ability of a component to meet standards (such as military, environmental, etc.), as declared by its manufacturer and verified by Q.C. The reliability data derive from the complexity and failure reports of a component; reliability parameters are those involved in the computation of the MTBF of systems, and are used in the verification of maintainability, spare parts planning and logistic support. Data for D.A. systems are simulation and testing models, data for design verification, topological data, etc. Equivalences are verified by Q.C., and imply the possibility to replace one component with another. Other data stored are: the different names of a component (i.e. Selenia partnumber, manufacturer type, Nato stock number, documentary partnumbers from co-contractors, Italian and English description), information about manufacturers and suppliers, failure reports, etc. Schematics and layouts have a reference number to a drawing handbook; diagrams have not been considered. At present the actual data have been left in application libraries, and the data base stores only their direct references.

All the data fit in a structure made of thirty-two record types, linked by twenty different sets and stored in nineteen areas. A reduced schema has been drawn in figure 4, with the purpose to give a feeling of the structure and show the allowed access paths. Each rectangular box marks a possible entry point to the structure. Dashed lines delimit the data structure devoted to the associative retrieval, to be described later. Its weak connection to the remaining structure gives high freedom in managing the system: there is the possibility to transport that structure to a stand-alone computer, with no modification to the overall system. Moreover, it is possible to avoid the selection of out-of-date components (stored in the data base only for documentary purposes) by avoiding their insertion into the structure for associative retrieval.

Figure 4: Simplified schema for the Engineering Components Data Base

System Administration

The load of the system administration falls on the D.A. group and the Quality Control Dept.: the first is in charge of system performance and of control over the application systems; the latter is responsible for data collecting and validation, quality audits, etc.
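The multiple entry points of figure 4 (Selenia partnumber, manufacturer type, Nato stock number, and so on) amount to several name indexes resolving to a single component record. The following is a toy sketch of such an access structure; all field names and values are invented for illustration and bear no relation to the actual DMS1100 schema:

```python
# Toy model of the components data base access paths: one component
# record reachable through several "name" indexes. All values below
# are invented examples, including the dummy stock number.

components = {
    "CMP001": {
        "description": "quad 2-input NAND gate",
        "quality_level": "MIL",
        "manufacturer_types": ["SN5400"],
        "nato_stock_number": "5905-00-000-0001",   # dummy value
        "suppliers": ["Texas Instruments"],
    }
}

# Secondary indexes: each maps an external name to the internal key.
by_mfr_type = {"SN5400": "CMP001"}
by_nato = {"5905-00-000-0001": "CMP001"}

def lookup(name):
    """Resolve any known name of a component to its record."""
    key = by_mfr_type.get(name) or by_nato.get(name) or name
    return components.get(key)

print(lookup("SN5400")["description"])  # → quad 2-input NAND gate
```

In the real system these indexes are record types and sets maintained by the data management system rather than in-memory dictionaries, but the access pattern (name in, record out) is the same.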
THE RACE SYSTEM

The objective to make the components data base available to a large population of users, as seen before, outlined the specifications for an application system. It must:
- provide on-line access to the data base;
- have a user interface designed for people with little or no acquaintance with computing systems;
- support interactive associative retrieval, i.e. selection of the components which match given technical characteristics;
- support cross reference, i.e. access to data by means of one of the names of a component (e.g. Selenia partnumber, manufacturer's type, etc.).

The name chosen for the system was R.A.C.E. (Ricerca Associativa Componenti Elettronici, Associative Retrieval of Electronic Components).

In order to support administration, a number of tools have been provided:
- batch update, for initial loading and high volume updating;
- on-line update, for steady maintenance of data;
- statistics, providing reports on system usage, both for consistency verification of the data base and for report generation;
- recovery and log facilities.

Functional operations

The user interface has been designed taking into account human engineering techniques. The interactive facility works on a "question-answer" basis. The dialogue is under control of the system, which asks questions and (whenever possible) suggests a list of suitable answers. On error or unsuccessful search, the question will be asked again. The user can abort the session, go to the beginning, go back one or more steps, or ask for display of the results at any point of the session.

The search goes through eight steps: selection of the class of components (e.g. Integrated Circuits), selection of the subclass (e.g. gates), then six steps going through six different parameters, such as technology, logical functions, homogeneous characteristics, etc. For each parameter the user is requested to supply one or more values. The system hands back the result of the search, i.e. how many components have been selected, starting from the last requirement. Results will be shown through a series of reports (logical "pages"), each of which is shown on the VDU screen in the form of a table. The data structure for the search is shown in figure 5.

The volume of components handled up to now roughly corresponds to 130 thousand manufacturer's types. The global components' turnover can be quantified in 5 per cent new acquisitions and 10 per cent revised per year.

Figure 5: Data structures for associative retrieval and output support

Figure 6: Accesses to the Engineering Components Data Base
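The stepwise narrowing performed by the RACE search can be sketched as successive filters over a candidate set, one dialogue step per parameter, which is what allows the user to stop, back up, or display results at any point. The parts and parameter names below are invented examples, not data base contents:

```python
# Sketch of the RACE stepwise search: the candidate set is narrowed
# one parameter at a time. All component data here are invented.

parts = [
    {"name": "IC-A", "technology": "TTL", "function": "NAND", "gates": 4},
    {"name": "IC-B", "technology": "TTL", "function": "NOR",  "gates": 4},
    {"name": "IC-C", "technology": "MOS", "function": "NAND", "gates": 2},
]

def narrow(candidates, parameter, values):
    """One dialogue step: keep components matching any supplied value."""
    return [p for p in candidates if p[parameter] in values]

step1 = narrow(parts, "technology", {"TTL"})   # e.g. first parameter
step2 = narrow(step1, "function", {"NAND"})    # next parameter
print(len(step1), [p["name"] for p in step2])  # → 2 ['IC-A']
```

Going "back one step" in the dialogue simply means discarding the latest filter and resuming from the previous candidate set.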
At each step of the session the user may ask for help. Aiding facilities, few and easy-to-remember commands, and decoded and formatted outputs make a "friendly interface" to the user. Help information and output formats highly depend on the class of components; for this reason they have been stored as structural data in the data base, so that the RACE programs do not depend on them. Help information, together with the subsequent questions to be asked by the system, has been stored in the Class, Subclass and Selection parameter records (fig. 5.a); for IC technology, for example, the system would prompt all the suitable technologies, such as TTL, MOS etc. Output format records contain headlines, read and write formats, and a dictionary of transcodes for decoded outputs. Output information, together with other data such as the list of names in the actual data records, has been stored in the structure of figure 5.

OTHER USES OF THE DATA BASE

There are many application systems other than RACE connected to the data base; in figure 6 there is a schematic representation of the environment of the data base. Batch connections have been provided for the D.A., logistics and technical documentation application systems. The SIGMA system, designed for stocks management and parts procurement, gets cross reference data from the RACE data base. The cross reference access is straightforward: it allows selection of information by entering "names" of a component or supplier, and supports the same output facilities as the interactive access. In order to avoid the proliferation of ad hoc programs to satisfy single queries, the use of the standard query language processor UNIVAC QLP1100 has been made possible. QLP is primarily used for data base administration: it is a very valuable tool, but requires a good skill in order to be effective and to preserve the integrity of the data base.

CONCLUDING REMARKS

The purpose of the paper has been to present a real experience concerning the distribution and utilization of technical information in an industrial environment. System implementation has been deliberately only sketched, in favour of the analysis of users' requirements and of the system position inside the organizational structure. People, organization and computer resources are the three faces of any CAD (or DA) system. The first two must be considered the most important, just because they raise non-technical problems. The best automated system counts for nothing if it is not accepted by the organization and does not offer any benefit to the user. It is, therefore, desirable that we, the CAD technical community (the automation developers, the "prophets" of the new industrial revolution based on computer technology), begin to consider success not only in terms of increased productivity, but also in terms of human beings' satisfaction. Quoting E. F. Schumacher (from the book "Small is Beautiful"): "to strive for leisure as an alternative to work ... would be a complete misunderstanding of one of the basic truths of human existence, namely that work and leisure are complementary parts of the same living process and cannot be separated without destroying the joy of work and the bliss of leisure".


CUSTOM LSI DESIGN ECONOMICS

J.G. Klomp
N.V. Philips / Elcoma Division
Nijmegen, The Netherlands

SYSTEM REQUIREMENTS

Some years ago the sentence "Custom LSI design economics" was a fiction, was the dream of the technologist. However, since Jules Verne wrote his "Round the World in 80 Days", fictions tend to turn into realities. The development of several tools was necessary to make the reality of economical LSI happen. To determine what properties these tools should have, let us have a closer look at the different aspects mentioned in the title.

1. Custom design

With standard building blocks developed by the I.C. manufacturers one can design telephone exchange systems, computers, instrumentation and consumer circuitry, military equipment etc. In earlier days custom design was only feasible for very rare specialties, and talking about economics in this context was a contradiction in terms. One might expect the I.C. maker to have in-house experts of all these disciplines; it is everywhere understood that this is impossible. That however puts the burden on the customer to do at least a large portion of the design himself, but customers are no experts in the different technologies. By consequence the tools have to be constructed in such a way that they are easy to learn and transparent to the customer, and guarantee that a good product can be designed without detailed knowledge of the technology chosen.

2. LSI

Large scale means that a large number of functions are put together: LSI in the sense of 1000, 2000 or more gate functions on one chip. However, a collection of 1000-2000 gates was in earlier days called a (sub)system, so we are INTEGRATING a complete system on one chip. Why capitals for INTEGRATING? Because a system on a chip is more than a bunch of gates which by accident happen to be within a 100 µ distance of each other. A system on a chip means that electrical, logical and layout properties and, last but not least, testing possibilities have to be merged. It requires integrated thinking, and therefore the tools should have very tight connections and well defined interfaces between the different phases of the design traject, so that a faultless and smooth transition and feedback is possible from one step to the other.

3. Economics

A number of items contribute to the economy of a design:
a) The one which is recognised worldwide is the number of square microns: the last µ should be squeezed out.
b) The hit rate: how many reruns are needed before the chip is according to the given perfect specification.
c) Related to that is the design time: how fast can a product be announced on the market once the spec is ready.
The following items have to be mentioned as well, because their impact on the economy is as important as the ones above:
d) Flexibility: how easy is it to do small modifications when the spec turns out not to be as perfect as expected.
e) Testing, a point that is often forgotten: how easy, and by consequence how cheap, are the good devices separated from the bad ones. Test engineers often say: one should save regardless the costs. This point suffers especially from the last µ squeeze.

For the tools this leads to the following requirements:
a) With respect to area, the result should be, within reasonable margins, comparable with the dimensions of the hand layout of an "average" designer.
b + c) The first shot should hit the target specification, so considerable effort should be spent on safety.
d) The system should be susceptible to small changes at the last minute.
As far as e) is concerned: this is a design philosophy, and if you are on the wrong track even the best tools cannot help you.

USER ASPECTS

Before describing the system developed for LOCMOS in Philips, some user aspects have to be mentioned. A computer aided design system is not for the fun of computer people, nor just for once, but is for designers, for every day use. When the tools are not well accepted they are only expensive burdens; it is a waste of money. Designers want fast turnaround. A complete specification of a new approach contains more wishful thinking than realistic thoughts about tools for every day, therefore:
x Develop the system not in research but in the middle of the users, step by step; the result should be easy to handle. User feedback is essential for each following step. The program will not be the beauty of the nation, but at least it is used.
x Keep the communication channels open. Both computer experts and designers speak their native tongue, however unfortunately in a different way. Rather use programmers with design background, so they understand each other.
x Use dedicated mini computers rather than large machines. They are cheap, reliable and always at your disposal, without the risk of being busy with the high priority managerial/planning jobs which are trying to find out why that design in the queue does not stick to its planning.
x The computer is not designed to do "creative" things, so do not let her do it. With an interaction between designer and computer a good balance can be obtained between the "creative mind" and the abilities of the machine for fast and accurate calculations and check procedures.

SYSTEM DESCRIPTION

With the above in mind the LOCMOS design system has been developed. It is a total package, so it covers all design phases, from basic electrical analysis to the generation of the numerical control tapes for mask pattern generators and testers. The flow diagram of the system is given in fig. 1.

Figure 1: Computer aided design system for digital ICs in LOCMOS (network description; PHILSIM logic simulator; ac and transient analysis; generation of COMPACT LOGIC cells; INTER cell placement and wiring, wiring capacitances; circuit mask generation; test verification/generation with TESGEN)

Cell library

The cells in the cell library range from simple gates and flipflops to more complex functions up to nine input variables; also ROM and RAM bit cells are available. The library contains about 130 items, which are all characterized with respect to layout as well as electrical and logical behaviour, including time delay factors. Each cell has a constant height and a variable width, and the in- and output signals can be reached both from the top and the bottom of the cell. An example of a cell structure is given in fig. 2; the timing characterization is illustrated in figs. 3 and 4.

Figure 2: Example of a COMPACT LOGIC cell

Figures 3 and 4: Propagation delay (tpd0, tpd1) as a function of the loading factor
CIRCUIT DESIGN

After acceptance of the development of a design, the first thing to do is to convert it into functions provided by the library and, if necessary, to develop special building blocks. Custom designs, and LSIs in general, often need something special. If a CAD system is not capable of handling these specialties without losing its efficiency or without becoming difficult to handle, users are not willing to accept it as a tool. To bypass this problem, a library has been developed which contains so-called "primitives", which are related to the single process steps: contact/buffer elements, internal aluminium interconnection elements, diffusion contacts, polysilicon input/output elements, etc. With these basic items the designer is able to develop all kinds of specialties, which will not only satisfy the customer specification, but will also fit into the rest of the system. In the latter case an analysis program is used to check the electrical properties. A correlation has been established between the calculated values and the diffusion product, with an accuracy of about ten per cent.

When all building blocks are available, the network is coded for the computer. Macro facilities, parameter descriptions etc. are reducing the amount of work; examples are given in fig. 5. This same description is then used for layout and testing, and not a new coding with all its inherent transfer fault possibilities. The design verification is done with a time-dependent logic simulator.

Figure 5: Examples of primitives related to the single process steps
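The macro facility mentioned above can be sketched as textual instantiation: a macro body is copied with actual pin names substituted for formal parameters. The netlist representation and syntax below are invented for illustration; the real LOCMOS coding language differs:

```python
# Sketch of a netlist macro facility: a macro body is instantiated
# with actual pin names substituted for formal parameters. The tuple
# netlist format is invented for this example.

def expand_macro(body, formals, actuals):
    """Return the macro body with formal parameters replaced."""
    binding = dict(zip(formals, actuals))
    return [(gate, [binding.get(p, p) for p in pins]) for gate, pins in body]

# Macro: a latch built from two cross-coupled NAND gates.
latch = [("NAND", ["S", "Q2", "Q1"]), ("NAND", ["R", "Q1", "Q2"])]

print(expand_macro(latch, ["S", "R", "Q1", "Q2"], ["set", "rst", "o1", "o2"]))
```

Because the expanded description is the single source for simulation, layout and test generation, the transfer faults of a manual re-coding are avoided, which is the point made in the text.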
The designer describes the sequences of input pulses in a so-called Simulation Control Language, and the network response calculated by the computer is given as shown in fig. 8.

[Figs. 6 and 7: listings of a coded network description: macro definitions (e.g. a D-LOCMOS cross-coupled flip-flop) and their use in the network.]

TESTING
This is an important step which, if not carefully handled, will cost a lot of money afterwards (see fig. 9).
So for layout and testing this same description is used, and not a new coding with all its inherent transfer-fault possibilities. After a process of simulating the function, correcting the network where necessary and resimulating, the specification is met. The designer has now spent about 2-3 months to come to this point.

As it is the intention to bring this logic onto the silicon wafer, already during the acceptance and design phases the test engineer has taken part in the discussions, to make sure that the circuit does not contain constructions which cannot be tested at all. Now he has to make sure that not only testing can be done, but that it can be performed on a production basis, which means on a standard automatic tester in a short time. With these aids he is informed about the defects which can or cannot be detected, and how efficient the test sequence is.

[Fig. 8: PHILSIM output: initialised input signals and the calculated response of the network as a function of time.]
As LSI test equipment is not cheap, the test sequence must be efficient. For this purpose the test engineer works with a logic test verifier, which deals with logic stuck-at-one/zero defects, and a program that generates the d.c. parametric tests (see figs. 9 and 10). From this point on, nobody is allowed to change the network description any more.

[Figs. 9 and 10: fault verifier output: stuck-at-one and stuck-at-zero defects and the test patterns by which they are first detected.]

Once the logic has been designed and it has been proven that testing is not a problem, the layout phase can be entered. The partitioning, and the placement of the cells, is done by hand. The placement information (see fig. 11) is added to the original network description and also stored in the computer.
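The logic test verifier itself is not specified in the paper. The following sketch shows the stuck-at-one/zero idea on a tiny invented network: for each input pattern, the good circuit and each faulted circuit are simulated, and a fault counts as detected when the outputs differ. The netlist, gate library, and function names are all illustrative.

```python
from itertools import product

NETLIST = [            # evaluated in order: (output_net, gate, input_nets)
    ("n1", "NAND", ("A", "B")),
    ("F",  "NAND", ("n1", "C")),
]
GATES = {"NAND": lambda a, b: 0 if (a and b) else 1}

def evaluate(pattern, fault=None):
    """Simulate the network; fault is (net, stuck_value) or None."""
    vals = dict(pattern)
    if fault and fault[0] in vals:     # fault on a primary input
        vals[fault[0]] = fault[1]
    for out, g, ins in NETLIST:
        vals[out] = GATES[g](*(vals[i] for i in ins))
        if fault and out == fault[0]:  # fault on an internal net
            vals[out] = fault[1]
    return vals["F"]

def coverage(patterns, faults):
    """Return the subset of faults detected by the pattern set."""
    detected = set()
    for p in patterns:
        good = evaluate(p)
        for f in faults:
            if evaluate(p, f) != good:
                detected.add(f)
    return detected

faults = [(n, v) for n in ("A", "B", "C", "n1", "F") for v in (0, 1)]
patterns = [dict(zip(("A", "B", "C"), bits)) for bits in product((0, 1), repeat=3)]
```

For this small circuit an exhaustive pattern set detects every stuck-at fault; a production test verifier performs the same bookkeeping over a much shorter, hand-tuned pattern sequence.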
Now the layout phase can be entered. Doing the partitioning and placement by hand has two reasons. In the first place the computer programs are not yet so extremely clever that they evidently beat the designers' brains; and second, because of the reason just mentioned, the man still has to make himself thoroughly acquainted with the computer placement to be able to do the final optimisation, and this costs as much time as doing it himself right from the beginning, while he is more involved. Here the knowledge of the designer, who has struggled with his product during the logic setup, is used, rather than exercising an algorithm. As the first shot in placement never is 100% optimal, a loop starts of iterative and partly interactive man-machine work. The outcome is presented to the designer (see fig. 11).

As both the interconnection scheme (derived from the network and placement) and the cell layout (retrieved from the library) are known, an automatic wiring routine is able to generate a double-layer interconnection pattern; as all information to generate the layout is available, this is not a large problem. The program calculates the contribution of the wiring to the circuit delay and this is fed back into the network. This enables the designer to make a final check of his logic with the actual final layout data. After this is accomplished we end up with two magnetic tapes: one is driving the mask pattern generator and the other is to control the automatic tester.
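The paper does not give the exact delay model used in this feedback step; fig. 12 suggests that each cell's propagation delay is a linear function of its load capacitance. The sketch below is a hedged illustration of that idea only — the function name, coefficients, and capacitance values are invented, not taken from the LOCMOS library.

```python
# After routing, the wiring capacitance of each net is known; each
# driving cell's propagation delay is then updated with a linear load
# model tpd = c0 + c1 * C_wire (coefficients illustrative).

def updated_delays(cells, wiring_pf):
    """cells: {name: (c0_ns, c1_ns_per_pf)}; wiring_pf: {name: load in pF}."""
    return {name: c0 + c1 * wiring_pf.get(name, 0.0)
            for name, (c0, c1) in cells.items()}

cells = {"G1": (1.2, 0.8), "G2": (0.9, 1.1)}   # invented coefficients
load  = {"G1": 2.5, "G2": 0.4}                 # extracted wiring capacitance
delays = updated_delays(cells, load)           # fed back into the simulator
```

The updated delays replace the library defaults in the network description, so the final simulation run reflects the actual layout.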
[Figs. 10 and 11: input listing of the interconnections, and the network and cell placement listing (TOPSTART/LEFTSTART, PATH, SPAC and cell placement directives).]
[Fig. 12: cell macros from the library, extended with propagation delay specifications (TPD0 and TPD1 as linear functions of the load capacitances), scaled and updated with the extracted wiring data.]

EVALUATION
How does the system meet the original goals?

HAND VERSUS MACHINE LAYOUT
A couple of designs made by hand, comprising 1500-3000 gates, have been reworked with the help of the system. It turned out that a designer working with the system is able to generate a layout which is less than 10% larger than a completely optimized hand layout, but the man-time spent was reduced by a factor of six to nine. For small series one can generate a layout within three man-weeks and still be less than 20% larger than an optimized layout. Of course this result was due to the fact that our cells can be reached from both ends, where in the cell itself no extra area is necessary to achieve this. That is why they are called COMPACT LOGIC CELLS.

HIT RATE
We have now about 150 LSI circuits designed with the system. Two of them failed; for the others the first one did meet the given specification. The most important point however was that the circuits met their specifications the first time. The typical turnaround time from accepted specification to mask and tester tapes is four months, at a computer cost of about $3000-4000. So in about half a year it is possible to make parts which are correct, which indicates that custom LSI design is economically feasible.
From the first one a part of the function was never simulated and in consequence refused to work. The second one had a delay-line effect, due to a distributed capacitance over a very long polysilicon line.

FLEXIBILITY
As all information about the circuits is stored in the computer, small changes can be made easily. The computer input is changed and the whole cycle can be run down with the eye just focused on the consequences of this change; the coding of the rest is still valid. With respect to layout the system is very flexible, because different approaches to the arrangement of the logic can be tried by just exchanging the deck of cards for the placement.

EDUCATION AND TRANSFER
As Philips itself does not have just one design centre, and as several LSI customers use our products in areas where we do not have enough expertise, it was of utmost importance that the system could be learned easily. When it takes half a year before a man is able to use CAD, the benefit is already doubtful, and for outside customers it is impossible. Both for internal and external customers an intensive course is given of one week, in which all steps are exercised on examples. The second week the customer starts with his own design and some guidance is given. After that they stand on their own feet, and only regular contacts are necessary to discuss implementation problems, especially about testing. This works for internal and external customers. We have made several systems in this way; in the most extreme case an outside customer developed a system of 15000 gates in 12 chips by sending card decks and receiving computer printouts. We only make and test the parts to their inputs; we are not even allowed to know how the customer's system works, but we have seen it working.

FUTURE TRENDS
Although the system so far has a good performance, we realize that with the growing complexity some tools will not be adequate. Especially in logic simulation and test verification, working only at the gate level will take too much computer time. A mixed mode for high-level and low-level logic descriptions is therefore now under construction. However, the defects are made on gate level, so we cannot forget it.

SUMMARY
The LOCMOS design system is in use in several centres, and delivers correct custom LSI parts at a reasonable price.

REFERENCES
1. J.G.M. Klomp: CAD for LSI. ACM SIGDA Newsletter, vol. 6, no. 3, 1976, pp. 11-15.
2. A. Strachan and K.J.M. Wagner: Local Oxidation of Silicon/CMOS: Technology/Design System for LSI in CMOS. IEEE International Solid State Circuits Conference 1974, Digest of Technical Papers, pp. 60-61.
3. J.G.M. Klomp: LOCMOS-CAD, ein wirtschaftliches und produktionsgerichtetes System für den Entwurf von digitalen LSI-Schaltungen. In: Grossintegration — Technologie, Entwurf, Systeme, herausgegeben von Prof. Dr. B. Höfflinger, R. Oldenbourg Verlag, München 1978, pp. 275-334.
AUTOMATIC GATE ALLOCATION, PLACEMENT AND ROUTING

Stephen C. Hoffman
CALMA Interactive Graphic Systems
Sunnyvale, California

Algorithms used for automatic gate allocation, placement and routing, when combined with an interactive editing capability, can decrease the time required for these steps in PCB design. An understanding of how the algorithms perform will aid the engineer and designer to obtain the most benefit and the least frustration from them.

Logic functions need to be assigned to physical devices, and each device is often capable of implementing a variety of logic functions. The gates are assigned to devices so as to reduce the package count and to increase the routability of the board. Gate allocation is usually combined with placement, since the routability of the gate allocation is dependent on the placement; gate allocation in the absence of placement can only minimize the package count. Placement of the devices onto the board is done so as to make the board routable. Routability is defined to be the probability that the board can be successfully completed by an automatic algorithm; there is no deterministic equation that defines routability. Automatic algorithms use a cost function to assess the routability of a placement. Placement must also conform to spatial restrictions based on design rules for thermal isolation, physical obstructions, and critical signal lengths. When the design has been completed, manufacturing output can be automatically generated.

A very similar set of tasks is involved in the design of master slice LSI, or gate arrays [3]. The concepts are similar enough so that little or no modification of a PCB CAD system is needed to support gate array design: logic function templates are placed onto gate array locations, and metalization and contacts are equivalent to etch and vias. This technology is likely to be of greater importance to current PCB engineers in the future.

© ECSC, EAEC, EEC, Brussels and Luxembourg, 1979 — Computer-Aided Design of digital electronic circuits and systems, ed. G. Musgrave, North-Holland Publishing Company.
INTRODUCTION
Computer aided design of electronic circuits centers around a design data base that contains the engineer's logic design as input to the gate allocation, placement and routing tasks (Figure 1). The process of gate allocation is the assignment of logic functions to physical devices. Much of an engineer's design may specify the physical devices to be used for discrete components and higher-level functions such as ALU chips or memories. Routing is the task of interconnecting the device pins using etch and vias.

[Figure 1: Integrated PCB design environment — digitizing feeds reports (parts list, block list, net list) into automatic packaging & placement; outputs include a wire list report, AUTOWRAP, and NC tapes for drilling and component insertion.]

AUTOMATIC PROCEDURES
Automatic gate allocation, placement and routing do not guarantee a completed or acceptable design. Most of the limitations associated with automatic procedures are apparent from examining the algorithms used to automate these tasks.
The cost function is based on parameters easily measured by the computer. Parameters commonly used are the total length of wire needed to connect all pins on the board, the total area of all nets, the distribution of expected routes, and the package count. Each of these parameters is then weighted for relative importance. Although such a cost function is sophisticated by algorithmic standards, it is greatly simplified when compared with the factors used by human designers to do placement.

Automatic gate allocation and placement is performed by two basic algorithms, constructive initial placement and iterative improvement, although there is considerable variation in the details of their implementation in different CAD systems. I will describe the general nature of each algorithm. [1]

Constructive initial placement begins with a blank board description of where components are allowed to be placed and the list of unplaced devices and gates. The devices and gates are selected and placed one at a time. The next device or gate to be selected is based on such parameters as the number of connections to already placed components, the size of the gate or device, and special attributes such as the number of connections to the board I/O pins. Weighting factors are applied to these parameters to give a number that represents the importance of placing that gate or device next. The most important gate or device is selected, and then a position is chosen for it based on the cost equation mentioned earlier. The algorithm compares the cost of placing a device in various locations and, in the case of gates, also tries allocating the gate to placed devices. The alternative with least cost is chosen for placing a device or allocating a gate. Since this technique is relatively fast and inexpensive, it can be used to generate a variety of initial placements by varying the parameter weights for unplaced device selection and the weights in the placement cost function, as well as by providing preplaced components to bias the final result.

In iterative improvement, interchange candidates are tried, and the candidate, if any, that provided the greatest improvement in cost is used for the actual interchange. The process is iterative in the sense that the interchanges result in a new placement that can make previously undesirable device interchanges now desirable. Unprofitable interchange attempts are avoided by concentrating on devices and gates whose individual placement cost is high or whose optimum location is furthest from their actual location. Often iterative improvement algorithms are implemented using simpler cost functions to speed the process, since the cost functions are evaluated many times — hundreds of thousands and even millions of times. Although this algorithm can operate on random initial placements, it is also a perfect companion for a constructive initial placement algorithm.
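The constructive selection-and-placement loop described above can be sketched as follows. This is a minimal illustration, not any vendor's implementation: the importance weights, the wire-length cost, and all names (`place_all`, `w_conn`, `w_size`) are invented for the example.

```python
# Constructive initial placement sketch: repeatedly pick the unplaced
# device with the highest weighted importance, then put it in the free
# location with the lowest cost (total Manhattan wire length to
# already-placed neighbours).

def place_all(devices, nets, locations, w_conn=1.0, w_size=0.1):
    """devices: {name: size}; nets: [(a, b)] two-pin connections;
    locations: list of (x, y) slots. Returns {device: (x, y)}."""
    placed = {}
    free = list(locations)
    unplaced = set(devices)

    def importance(d):
        conns = sum(1 for a, b in nets
                    if d in (a, b) and (a in placed or b in placed))
        return w_conn * conns + w_size * devices[d]

    def cost(d, loc):
        total = 0.0
        for a, b in nets:
            other = b if a == d else a if b == d else None
            if other in placed:
                ox, oy = placed[other]
                total += abs(loc[0] - ox) + abs(loc[1] - oy)
        return total

    while unplaced:
        d = max(unplaced, key=importance)
        best = min(free, key=lambda loc: cost(d, loc))
        placed[d] = best
        free.remove(best)
        unplaced.remove(d)
    return placed
```

Note the two features discussed in the text: the loop is fast because each component is placed exactly once, and it is greedy, so an early placement can occupy a location that would have been optimal for a later component.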
The constructive initial placement algorithm has important features that affect its performance. Devices and gates are placed one at a time, and they are not moved once they are placed. This means that the algorithm is relatively fast. It also means that the resulting placement is not optimum, since optimum locations for a device are often occupied by previously placed components. A designer can then select the best placement based on the total cost function or based on his own intuition.

Iterative improvement algorithms begin with all components placed and all gates allocated. The device locations and gate allocations are then interchanged in an attempt to lower the total cost function. This algorithm is much slower than constructive initial placement due to the number of times cost functions are calculated; constructive initial placement arrives at a good approximate placement much faster than iterative improvement. The iterative algorithm achieves its most dramatic improvement early in the process and should eventually be stopped when the amount of improvement per CPU time invested falls below an acceptable value.
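The interchange loop can be sketched as below — a deliberately naive version that tries every device pair and keeps the best swap until no swap improves the cost. The cost function (total Manhattan wire length) and all names are illustrative; production programs prune candidates and use cheaper incremental cost updates, as the text notes.

```python
# Iterative improvement sketch: pairwise interchange of placed devices.

def wire_length(placement, nets):
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

def improve(placement, nets):
    placement = dict(placement)        # work on a copy
    devs = list(placement)
    improved = True
    while improved:
        improved = False
        base = wire_length(placement, nets)
        best, best_gain = None, 0
        for i in range(len(devs)):
            for j in range(i + 1, len(devs)):
                a, b = devs[i], devs[j]
                placement[a], placement[b] = placement[b], placement[a]
                gain = base - wire_length(placement, nets)
                placement[a], placement[b] = placement[b], placement[a]
                if gain > best_gain:
                    best, best_gain = (a, b), gain
        if best:
            a, b = best
            placement[a], placement[b] = placement[b], placement[a]
            improved = True
    return placement
```

The full-recompute inner loop is what makes this approach slow: the cost function is evaluated once per candidate pair per pass, which on a real board adds up to the very large evaluation counts mentioned above.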
Running the iterative algorithm after initial placement can yield a significant improvement in the total cost of the placement. One major problem in evaluating different placement programs is that the determination of optimum routability depends strongly on the performance of the routing algorithms. Due to the inherent simplifications in the cost functions and the algorithms, automatic placement is seldom judged superior to human effort on small boards, although automatic placement is usually adequate and faster than manual placement. Research is continuing on placement algorithms.

A common simplifying assumption, one that excludes discretes, is that all devices are the same size. Discrete components are also poorly handled by many CAD systems. Large components are usually preplaced, and discretes are usually added after automatic placement when dealing with programs based on the fixed-size assumption; discrete components and large DIPs must then be placed manually. Special placement design rules can be followed by automatic algorithms provided that the rules can be reduced to simple concepts, such as classifying components and locations and restricting the placement of certain components to a class of locations.

Automatic routing can be viewed as a three-step process. First the nets are put into a sequenced list of connections to be routed one at a time. Then the routing algorithm attempts to route each connection, while maintaining minimum spacing and adhering to design rules that restrict the use of vias or routes in certain areas of the board. The sequencing of connections is an important task, and the designer usually contributes to it by specifying critical nets and controlling the weights of the automatic ordering function. The ordering function is based on measurements of the distance between points to be connected, the amount of area between the points, and connection to the I/O connector pins or other critical nets.

Several important simplifying assumptions are made by routing algorithms. One is that all routes are made on a grid. Common grid sizes of 50 mil (.050 inches) and 25 mil (.025 inches) are used because they match the pin spacing on DIP devices. Another assumption is that minimum spacing can be maintained by using a simple scheme of occupied grid points; the routing algorithms use only orthogonal lines for creating the routes.
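An ordering function of the kind just described can be sketched as a sort key over the pending connections. The weights and the tuple layout are invented for illustration; the only point is that criticality dominates, then weighted length and bounding-box area.

```python
# Connection-ordering sketch: critical nets first, then shorter
# connections and smaller enclosed areas.

def order_key(conn, critical=frozenset(), w_len=1.0, w_area=0.01):
    (x1, y1), (x2, y2), name = conn
    length = abs(x1 - x2) + abs(y1 - y2)        # Manhattan distance
    area = abs(x1 - x2) * abs(y1 - y2)          # bounding-box area
    key = w_len * length + w_area * area
    return (0 if name in critical else 1, key)  # critical nets sort first

conns = [((0, 0), (5, 4), "CLK"),
         ((0, 0), (1, 1), "D3"),
         ((0, 0), (9, 9), "A0")]
queue = sorted(conns, key=lambda c: order_key(c, critical={"CLK"}))
```

The designer's contribution enters through the `critical` set and the weights, exactly the two knobs the text says a designer controls.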
On large boards, where the amount of data can boggle a human designer, placement algorithms often outperform human designers based on simplified cost analysis. This may be due to other strategies employed by the human designer that are not measured by the cost function. Complicated rules with dependent situations are beyond the scope of these algorithms.

Automatic routing procedures provide the capability to interconnect the nets on a PC board. If a connection is routed successfully, it is removed from the list; if not, it is skipped and the next connection is routed. The third step, a clean-up program, is run to remove unneeded vias and perform a variety of optional tasks such as realigning traces, thickening traces, or replacing staircase routes with straight-line connections. The final assumption is that routes can only be made along orthogonal paths; some routers do have post-routing clean-up programs that can replace staircase-shaped routes with non-orthogonal straight lines. The procedures also allow the routing to make use of existing board features such as initial bus routing and fixed vias.

Using grid occupancy to insure proper clearance, a wide bussing trace, for example, may prohibit the use of adjacent grids for routing: the bussing trace is said to occupy the adjacent grid cells even though the actual etch does not overlap the grid point.
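The grid-occupancy bookkeeping is simple enough to sketch directly. The representation below (a set of blocked grid points, with wide traces also claiming their neighbours) is an illustrative minimum, not a description of any particular router's data structure.

```python
# Grid-occupancy sketch: a trace marks its own grid points; a wide
# (bussing) trace additionally blocks the adjacent channels, so the
# router sees that no new etch may pass next to it.

def occupy(grid, points, wide=False):
    for x, y in points:
        grid.add((x, y))
        if wide:                      # bus blocks the neighbouring columns
            grid.update({(x - 1, y), (x + 1, y)})

def free(grid, x, y):
    return (x, y) not in grid

grid = set()
occupy(grid, [(5, y) for y in range(4)], wide=True)   # vertical bus at x=5
```

Clearance checking then reduces to a constant-time set lookup per grid point, which is what makes the occupied-grid simplification attractive.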
Performance of automatic routers is also dependent on the design rules used. Adopting design rules favorable to automatic routing performance is one of the most effective ways to increase the probability of 100% completion. Some design rules that can affect automatic routing performance are given below; particular attention should be given to the effect of vias and traces on adjacent channels, since blockage of adjacent channels can greatly degrade router performance.

1. Grid size, pad size, trace width and clearance rules should be tailored to allow the most efficient use of board space based on the grid occupancy representation.

There are an abundance of routing programs, although they make use of two basic algorithms, the probing algorithm and the flood algorithm; a particular router can make use of both. The probing algorithm finds the first routes very quickly and can complete a high percentage of the connections, although it usually does not do 100% routing. The flood algorithm is guaranteed to find a path if one exists, and the path it finds is also guaranteed to be the shortest or least costly path available; however, this is a slow algorithm compared to the line-probe algorithm. Many of the connections missed by a line-probe algorithm can be found by the flood algorithm. Even so, 100% completion still cannot be achieved reliably, if at all. The performance of these algorithms is limited by some fundamental aspects of the algorithms. One such aspect is that the algorithms route one net at a time and do not consider the consequences of the current path on the routability of future paths. There has been, and will continue to be, research on new algorithms and algorithm improvements; there are some promising experimental algorithms that are not limited in that aspect, most notably the graph-theoretical approach [5] and iterative conflict resolution [4].

Density of IC's on the board affects routing performance dramatically for both human designers and automatic algorithms. The number of IC's (or equivalent IC's) per square inch is often used as a measure of the difficulty presented in a board design; actually, the number of interconnects compared to the number of channels available for routing is a more relevant measure. Starting from zero, the algorithm can complete a high percentage of connections; as the board becomes congested, routes contain more detours and eventually routing attempts begin to fail. Theoretical analysis of routing algorithms indicates that performance drops off dramatically after a certain density of grid occupancy is reached [6].
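The interconnect-to-channel measure mentioned above can be illustrated with a back-of-the-envelope utilization ratio. The function and every number below are invented; the only point carried over from the text is that completion rate falls off sharply once demand approaches the available channel capacity.

```python
# Illustrative routability measure: demanded wire length versus
# available channel capacity.

def utilization(n_connections, avg_conn_len, n_channels, channel_len):
    return (n_connections * avg_conn_len) / (n_channels * channel_len)

u = utilization(n_connections=900, avg_conn_len=2.0,   # inches, invented
                n_channels=120, channel_len=25.0)
```

A ratio well below 1.0 suggests a routable board; as it approaches and exceeds 1.0, detours multiply and automatic completion becomes unlikely.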
Nets should generally be routed in order from short to long, small area to large area, and of course critical nets before non-critical nets.

The probing algorithm, or depth-first algorithm, uses straight line segments to reach the target. The most direct route is attempted; if it is blocked by an obstacle, the algorithm looks for a way around it, or backs up. The algorithm usually has limits on the number of probes to try or the number of detours allowed, so a path may exist that will not be found by this algorithm.

The flood algorithm, or breadth-first algorithm, is based on expanding a frontier from one grid to the next until the target is reached. Various modifications exist to the basic algorithm, such as limiting the flooding to a window that surrounds the points to be connected. The algorithm can also make flooding occur faster or costlier in one axis direction than the other, to give a directional bias for different layers. It makes more sense, though, to use this algorithm after having used a line-probe algorithm.
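The flood (breadth-first) expansion can be sketched directly; this is the classic Lee-style wavefront on an unweighted grid, without the windowing or directional-bias refinements mentioned above, and the function name and interface are invented.

```python
from collections import deque

# Flood-algorithm sketch: expand a frontier from the source one grid
# cell at a time until the target is reached, then retrace the path
# along decreasing distance labels. Finds a shortest grid path if any
# path exists.

def flood_route(blocked, src, dst, width, height):
    dist = {src: 0}
    frontier = deque([src])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == dst:
            break
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in blocked and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                frontier.append((nx, ny))
    if dst not in dist:
        return None                     # no path exists
    path, cur = [dst], dst
    while cur != src:                   # retrace via decreasing labels
        x, y = cur
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if dist.get(nb) == dist[cur] - 1:
                cur = nb
                break
        path.append(cur)
    return path[::-1]
```

The guarantees quoted in the text fall out of the structure: breadth-first labelling cannot miss a reachable target, and the first label reaching the target is minimal — which is also why the flood algorithm is so much slower than a line probe on easy connections.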
2. Board shape can also affect routing performance. A board that has matched X and Y channel capacity is most advantageous.
3. Special consideration should be given to power and ground. Due to their length, power and ground traces are significant topological barriers to routing. Making use of bus bars or buried power and ground planes can significantly improve automatic routing performance.
4. For multilayer board designs, the use of fixed vias can offer significant improvement over random via placement. Regularly spaced vias insure that channel availability is optimized for routes on internal layers. The number of fixed vias that provides the best routing results should be somewhat less than the number of IC pins on the board.

In spite of these limitations, automatic procedures offer significant improvements in the time needed to complete a design. When evaluating this improvement it is necessary to consider all tasks needed to complete the design: input data needs to be prepared, and the output of the automatic procedures needs to be edited for completion of the design and for correction of violations of design rules not automatically followed. Although automatic procedures are not promising for someone who expects a fully automatic solution, they are attractive to someone who can generate the required input automatically from digitized schematics and can interactively edit the design at a graphics terminal.

INTERACTIVE EDITING
To optimize the complete design cycle from schematic input to manufacturing output, an interactive edit facility is needed, as is automatic verification: it must verify that all interconnections are complete and that design rules have been followed. Automation of checking tasks can be done much more successfully than automation of design tasks. The interactive graphics facility can make check plots and allow graphic design data to be manipulated using a tablet and CRT display. It can offer special display functions for identifying rules violations, viewing unrouted connections, and other special functions. Another important feature that can be offered is an expanded scope of data representation, such as higher grid resolution to allow optimum positioning of traces or components, and all-angle line segments to allow the maximum number of traces to pass between obstacles.

Graphic edit facilities can be made available on main-frame computers for main-frame based routers, or they can be provided by a minicomputer-based interactive graphics system. Some vendor-supplied graphics systems offer automatic placement and routing software that runs directly on the mini-computer. Such systems can be self-contained, from digitizing schematic drawings and capturing input for the automatic placement and routing programs, to providing manufacturing output for a wide variety of numerically controlled machines. Mini-based systems can operate as satellite subsystems and interface to the main frame for batch execution of placement and routing programs.
Rules concerning the structure of nets, such as those encountered in ECL technology, can deteriorate router performance and even make the use of automatic algorithms infeasible. Few, if any, routers are capable of adhering to such rules. Software for automatic gate allocation, placement and routing is thus limited not only in the ability to complete the job, but also in the ability to adapt to design rules.

DESIGN METHODOLOGY
There are many possible ways to configure a CAD system based on various design methodologies. Design methodologies vary from one installation to another, and sometimes from one project to another, based on variations in budgets for design tasks. Rather than being at the center of such a system, automatic procedures are best appreciated as optional tools in the design process. When a manufacturing organization budgets money to design PCB's, it must budget for the manual encoding of hand-drawn engineering schematics if it is to use automatic algorithms. Yet, if engineering schematics were digitized, not only could the manufacturing organization receive error-free machine-readable input, but engineering would have higher-quality schematic drawings.

Percentage of routing completed is often an irrelevant indication of the usefulness of automatic procedures: the strategy for using automatic algorithms changes when manual completion is foreseen as an inevitable task. Manual completion can be made easier by reserving certain features on the board for use by the designer in the completion process; the designer is not restrained by the simplifications used in automatic algorithms. Reserving two layers of a multilayer board can also make manual completion an easy task, particularly if it is combined with the use of reserved via locations. These features often play an important role in the perceived benefit of using automatic gate allocation, placement and routing. One of the most useful features to reserve for manual completion is space for vias.
One of the most useful features to reserve for manual completion is space for vias. Fixed via locations are described to the router as obstacles on all layers; the designer can then insert vias at these locations or use them for routing traces. On a congested board, a human designer will make more intelligent use of these features than an algorithm. Another strategy for six-layer boards with fixed vias is to attempt to route two layers automatically without any vias, then route two more layers automatically with vias. Routing automatically on a large grid (50 mil) and using the extra channels provided by a small grid (25 mil) for manual completion is also a useful technique.

Design methodologies also reflect variations in company organization and investment in CAD facilities. Software for automatic gate allocation, placement, and routing should be modular enough to support these variations in methodology. One configuration might be to use automatic gate allocation and placement to provide fully automated generation of wire-wrap prototype boards. Another example is the method of using package-level schematics and an assembly drawing as input to the routing programs, bypassing automatic gate allocation and placement. It is possible that both of these methodologies are followed in the same installation.

The digitizing of engineering schematics, however, is often perceived as an extravagant item in engineering budgets, and there are also times when the resources for digitizing schematics simply are not available. In that case, manual encoding is the only way to enter the data into the CAD system. Manual encoding is error-prone and requires manual checking procedures that digitized engineering schematics would make unnecessary.

CONCLUSION

Automatic gate allocation, placement, and routing are useful features of a CAD system. Although limited in performance by the algorithms used, these automatic procedures can provide great cost and time savings when used with design rules that favor the algorithms and the manual completion task.
Rather than attempting to get the highest percentage of completion from the automatic algorithms, then, the aim is to make the manual completion task as easy as possible.

INTEGRATED CAD FOR LSI

K.G. LOOSEMORE
COMPEDA LIMITED, COMPEDA HOUSE, WALKERN ROAD, STEVENAGE, HERTS SG1 3QP

The increasing complexity of integrated circuits is making it more and more difficult to design circuits manually. Compeda's GAELIC system has been developed to provide engineers with a complete design facility including powerful automatic layout, interactive drawing/editing, logic simulation, design rule checking and circuit function checking, in addition to draughting, editing and mask generation functions. The automatic layout program is designed for efficient layout of variable-size cells in any technology and allows user interaction for further improvements in efficiency. The design rule checking program uses a novel language approach which allows users to code their own design rules and significantly broadens the range of rules that can be applied. This paper will discuss the detailed requirements of an integrated CAD system and examine the GAELIC approach to meeting them.

INTRODUCTION

The past decade has seen considerable advances in technology and design expertise in the field of integrated circuits. So much has the complexity of ICs increased that the paper-and-pencil methods applicable in the 60's no longer give the necessary speed and flexibility required by IC designers today. Automatic draughting (digitisers) eased the problem a little, but eventually more automation was required, and so blossomed the concept of Computer Aided Design. To date, the conventional systems have tended to concentrate on automating the draughting process; this, coupled with growing manpower costs, has highlighted the need for an integrated design aid system. A typical CAD system will consist of a small computer with some disk storage, a visual display terminal of some sort, and perhaps a digitiser and/or magnetic tape drives and/or a plotter.
It will have the capability to accept input from either an on-line or off-line digitiser, and then possibly a number of programs which drive mask-making devices either on-line or off-line. Here the computer is expected to play a more significant role in the design process, by performing such useful functions as checking of input data, allowing the user to modify and develop his design with interactive graphics techniques, and finally driving the mask-making devices automatically.

For mask function checking, the system includes a program which operates on the mask data to generate a gate map of the circuit. The simulator operates on a selective-trace, next-event basis and detects hazards in the time domain; with the increasing use of dynamic logic, it also needs to be able to model stored-charge conditions.

(COMPUTER-AIDED DESIGN of digital electronic circuits and systems, G. Musgrave, editor. North-Holland Publishing Company. © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979.)
This type of system forms the hub of the design process, but it completely ignores some of the other functions the designer may wish to perform, even though it is quite practicable for them to be carried out automatically. An example of this is in simulation: the designer has designed his logic and now he wants to check that it works, but because of the limitations of his CAD system he will probably have to use a different computer! There are several other examples and contexts where it would obviously help to have everything "under one roof". CAD technology has responded to this problem by producing logic simulators. Systems which supply a structured range of utilities, all ultimately connected, are referred to as integrated CAD systems, and this paper proposes to discuss the requirements of an integrated CAD system compared with that which is available.

In order to get a "true" rather than an imagined view of the requirements here, several months were spent talking to IC designers across a broad spectrum, ranging from some very large USA semiconductor manufacturers to a few small British firms. In considering the simulation requirement, it is possible to list the main requirements of our first integrated CAD utility, the simulator:

1. It must be library based. Because the design team is constantly needing to update its library of building blocks, there must be a way to specify and store descriptions of those logic structures which are going to be used later; and because technology is moving so fast, this "library" of building blocks does not remain static by any means. A convenient way to implement this is to allow new building blocks to be made up from combinations of existing ones.

2. The logic levels simulated should reflect those actually used in the industry. For digital circuits, four 'logic levels' are required, these being "on", "off", "don't care" and "high impedance" states.

3. A basic internal set of often-used gate types. These will include the usual run of "and", "or", etc., plus "wired or", and basic memory types.

4. A macro facility, needed in order to achieve (1) and also to reduce coding effort. The macro facility can also reduce the amount of data space needed by the simulator, thus allowing larger circuits to be simulated; if used properly, the cost of simulation can also be substantially reduced.

5. Timing characteristics. A minimum requirement is flexible rise and fall time definition for all of the gates (including those in the internal library) and the ability to vary the timing of the output trace.

6. Dynamic logic capability.
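The macro facility of requirement 4 amounts to flattening nested building-block definitions into the primitive gates the simulator knows. A minimal sketch with a hypothetical macro library; the port convention (inputs first, output last) and the net-renaming scheme are assumptions for illustration, not a description of any particular simulator:

```python
from itertools import count

PRIMITIVES = {"and", "or", "not"}

# Hypothetical macro library: each macro lists its ports (inputs first,
# output last) and a body of (gate_type, output_net, input_nets) triples.
# Bodies may instantiate other macros, so definitions nest.
MACROS = {
    "xor":  {"ports": ["a", "b", "y"],
             "body": [("or",  "t1", ["a", "b"]),
                      ("and", "t2", ["a", "b"]),
                      ("not", "t3", ["t2"]),
                      ("and", "y",  ["t1", "t3"])]},
    "xor3": {"ports": ["a", "b", "c", "y"],
             "body": [("xor", "t", ["a", "b"]),   # nested macro instance
                      ("xor", "y", ["t", "c"])]},
}

_fresh = count()

def expand(gtype, out, ins):
    """Recursively flatten one gate instance into primitive gates.
    Internal macro nets get fresh names so that repeated instances
    of the same macro never collide."""
    if gtype in PRIMITIVES:
        return [(gtype, out, list(ins))]
    macro = MACROS[gtype]
    rename = dict(zip(macro["ports"], list(ins) + [out]))
    gates = []
    for sub_type, sub_out, sub_ins in macro["body"]:
        for net in [sub_out, *sub_ins]:
            if net not in rename:
                rename[net] = f"_n{next(_fresh)}"
        gates += expand(sub_type, rename[sub_out],
                        [rename[n] for n in sub_ins])
    return gates

flat = expand("xor3", "parity", ["x1", "x2", "x3"])
```

Only the macro definitions are stored once; each use is expanded on demand, which is the data-space saving the text refers to.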
There has been a gradual increase in the use of modularity in this area. Whereas initially it was sufficient to design a circuit in terms of basic gates, the increase in complexity has meant that much higher-level logic modules are being developed and used: shift registers, counters, encoders, decoders and even blocks of memory are being used as basic building blocks.

To describe the requirements of an integrated CAD system, it is worthwhile to take a look at the ways in which integrated circuits are designed. The designer will begin by designing a circuit as a logic diagram: the first phase of design consists of defining, in some level of logic, the logical make-up of the circuit. So here we come to the designer's first problems. In order to check the circuit's correctness, he may choose to use one of the several commercially available circuit simulation programs. Three different types of output are required from the simulator.
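A selective-trace, next-event simulator over the four logic values of requirement 2 can be sketched in miniature as follows. The unit-delay model and the cross-coupled NAND latch used as a test circuit are illustrative assumptions, not the behaviour of any specific product:

```python
ZERO, ONE, X, Z = "0", "1", "X", "Z"   # on, off, don't-care, high impedance

def nand(a, b):
    if a == ZERO or b == ZERO:
        return ONE
    if a == ONE and b == ONE:
        return ZERO
    return X      # an X or Z input leaves the output unknown

# Hypothetical netlist: gate name -> (function, input nets, output net).
GATES = {
    "g1": (nand, ("set", "q2"), "q1"),    # cross-coupled NAND latch
    "g2": (nand, ("reset", "q1"), "q2"),
}

def simulate(stimuli, steps=20):
    """Unit-delay, event-driven simulation with selective trace:
    only the fanout of nets that actually changed is re-evaluated."""
    nets = {n: X for fn, ins, out in GATES.values() for n in (*ins, out)}
    fanout = {}
    for name, (_, ins, _) in GATES.items():
        for n in ins:
            fanout.setdefault(n, []).append(name)
    events = [(0, n, v) for n, v in stimuli]          # (time, net, value)
    for t in range(steps):
        changed = [n for tt, n, v in events if tt == t and nets[n] != v]
        for tt, n, v in events:
            if tt == t:
                nets[n] = v
        for gate in {g for n in changed for g in fanout.get(n, [])}:
            fn, ins, out = GATES[gate]
            v = fn(*(nets[i] for i in ins))
            if v != nets[out]:
                events.append((t + 1, out, v))        # unit gate delay
    return nets

state = simulate([("set", ZERO), ("reset", ONE)])     # pull set low: latch sets
```

Starting from all-unknown (X) state, only two gate evaluations per time step are needed here; that economy is the point of selective trace.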
What is required is a layout program which represents true CAD in the sense that it allows the user to interact intimately with the design process as it is carried out, thus tailoring the end product more exactly to the designer's requirement. After the logic of the circuit has been validated, the next step is layout. Here CAD, or rather Design Automation, has responded with a host of automatic/interactive layout programs. Many of these in the past have been standard-cell based, but the need has been realised for something more flexible: because of the diversity of complexity of the different cells being laid out (from individual gates up to complete memories), the standard-sized-cell approach has already reached its limits. Most of the currently used automatic layout programs also suffer from a lack of interactive capability. This is proving to be a problem, particularly when designs turn up where it is necessary to put certain modules in particular places.

For any reasonable size of circuit, layout takes place in two stages. First of all a set of logical modules is designed, consisting typically of counters, shift registers and blocks of similar logic. The second phase of layout is concerned with performing the physical interconnections between the blocks: the blocks are placed on a chip and the interconnections between them are routed. Historically, the logic modules have been input by making a drawing on paper and then digitising it into some sort of computer data base; this method is still largely used, although some use is being made of "standard cell" libraries supplied by larger manufacturers. For this purpose the designer requires some sort of computerised drawing board.

So what can we expect of CAD modules to help with automatic layout?

1) Must be library based. Quite apart from the obvious need for a library of cells when using a standard-cell approach, we need, as with the simulator, the ability to build up a library of custom-designed modules for use in later designs.

2) Must be able to handle any size of cells.

3) Must be technology/process independent. Because of the rapid advances being made in the field, any automatic layout module that does not have this capability will quickly find itself out of date.

4) Interaction.

Non-Automatic Layout

There are a few instances in IC design where automatic layout techniques are not applicable, in particular memory design. Because of the regular structure of this type of chip, it is often more efficient, in terms of man-months and the final size of the design, for the layout to be carried out "manually".
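The first stage of layout, placing variable-size blocks on the chip, can be caricatured by a simple row packer. Cell names and dimensions here are invented, and a real placer would also optimise interconnect length; this shows only the bookkeeping that variable cell sizes require:

```python
def place_rows(cells, chip_width):
    """Naive row-based placement for variable-size cells: pack cells
    left to right, starting a new row when the current one is full.
    Each row is as tall as its tallest cell, which is exactly the
    freedom a fixed-height standard-cell scheme lacks."""
    coords, x, y, row_h = {}, 0, 0, 0
    for name, (w, h) in cells.items():
        if x + w > chip_width:          # current row full: open a new one
            y += row_h
            x, row_h = 0, 0
        coords[name] = (x, y)
        x += w
        row_h = max(row_h, h)
    return coords, y + row_h            # placements and total chip height

# Hypothetical cells: name -> (width, height) in arbitrary layout units.
cells = {"ctr": (40, 20), "sr": (30, 20), "ram": (60, 50), "dec": (25, 15)}
coords, height = place_rows(cells, chip_width=80)
```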
A newer approach, not yet fully accepted, is to design logic modules using a 'Stick' diagram (1), which allows some degree of technology independence.

I have deliberately left input to the simulator for discussion later, as this should be regarded as an input to the integrated system rather than as input to one module. The simulator's output is required at (a) specified times, (b) specified intervals, and (c) on certain events (e.g. a gate changing value).

Because of the way in which these modules are used in a design, it would seem that a linear system of use is reasonable, in which the interface files between the various modules are exactly the same: the output from automatic layout would be input to a functional checking module, and so on. This type of system has the advantage that it imposes a certain degree of regimentation on the designer and acts as an aid to design management. Check plots are also going to be needed, together with some archiving capability, so that successful design effort can be re-used for later designs. The restricted applicability of many systems is understandable, since IC manufacturers tend to be a secretive lot.

Assuming the layout is now completed, what further work is required before sending the data to an IC manufacturing plant? There are two main checks which are performed at this stage.
The first of these is called function checking, and it is a new concept in CAD. Historically, this has been carried out by the somewhat laborious process of having a few designers/draughtsmen scan over the artwork, device by device, trying to spot the errors.

Design Rule Checking

The second main check to be performed at this stage is called Design Rule Checking. Because of the limitations of silicon processing, and because of problems like misalignment of masks, circuits have to be designed to certain minimum widths, separations, etc. There are a number of programs available at the moment, most of them geared towards a particular manufacturer's environment; manufacturers, I suspect, like to play "ours is better than yours" with each other. Very few of the many CAD systems about have an integrated design rule checker. There is now a capability to perform this check automatically, and I feel that both this and automatic layout, while not fully appreciated now, will have to become a must for inclusion in any integrated system.

An Integrated System

So far I have discussed the main modules that can be expected to exist in an integrated CAD system; the next question must be: "How do they fit together?" An early answer was the serial system, which stems from the need to run the design through the modules one at a time, making sure that each stage is finished before the next one starts: a design spec would be input to an automatic layout module, and so on down the chain. From this type of system arose the structures of modern integrated systems. This has the effect of a star network of utilities centred about a design database which binds the whole system together. The graphic editing utility, because of its broad application to all phases of design, will normally stand at the hub of an integrated system; it usually consists of an interactive graphics editor using either a storage or refresh display, usually coupled to some sort of digitising capability. Such graphic editing systems are well known, and it is not, therefore, worth going into much detail about their requirements. The integrated system also needs to have, preferably, a few different post-processors, giving the capability of running on several different devices (e.g. pattern generators).
The function check itself makes sure that the rectangles, polygons, etc. actually represent the circuit intended. When the checks are complete, it is time to generate control tapes for some mask-making device.

Because of its central position in the system, the design of the database is very important. What are the requirements of a database for an integrated CAD system?

1. It must be able to structure the design (automatically) as a number of separate areas, so that when access is required to only a small part of the design, only that area needs to be accessed rather than the whole database.

2. It must attempt to reduce the amount of data stored to an absolute minimum. Because of the rate at which complexity is increasing in LSI designs, a million polygons are certainly not far away, and the way is open for quite clever storage schemes to manage the amount of computer data that this implies.

3. It must be accessible in a variety of ways in an efficient manner.

4. Because of the nature of the job carried out by the database, it needs to be implemented in the most efficient way on each machine on which it is mounted.

The GAELIC System

In order to illustrate an integrated CAD system and how it is used, I would like to take as an example Compeda's GAELIC system, which has been developed over several years from an idea by a research worker at the University of Edinburgh and has successfully moved with the times. The GAELIC system consists of a central hub including a 2½-dimensional graphic editor working directly on the database. Input to the system is via a number of routes. For those parts of the design which require information other than graphical, the system also inputs a language which is compiled directly into the database and which can also be generated from the database. A binary database format tends to reduce machine independence, so the readable-language concept has been used with advantage as a completely machine-independent design definition. The language is also used as a 'first instance' hook for individual users to interface their own software to the GAELIC system. The resulting system is one that appears to have more flexibility of use than any other.
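Requirement (1), accessing only the area of the design being worked on, can be illustrated with a simple spatial-bucketing scheme. This is a toy sketch of the idea, not GAELIC's actual storage format; the bucket size and shape tuples are assumptions:

```python
from collections import defaultdict

CELL = 100  # bucket size in database units (value hypothetical)

class AreaIndex:
    """Toy area-structured storage: shapes are bucketed by the grid
    cells their bounding boxes touch, so a window query reads only
    nearby buckets instead of scanning the whole design."""
    def __init__(self):
        self.buckets = defaultdict(list)

    def _cells(self, x1, y1, x2, y2):
        for cx in range(x1 // CELL, x2 // CELL + 1):
            for cy in range(y1 // CELL, y2 // CELL + 1):
                yield (cx, cy)

    def add(self, shape):  # shape = (layer, x1, y1, x2, y2)
        for cell in self._cells(*shape[1:]):
            self.buckets[cell].append(shape)

    def query(self, x1, y1, x2, y2, layer=None):
        """Return shapes overlapping the window, optionally one mask only."""
        hits = set()
        for cell in self._cells(x1, y1, x2, y2):
            for s in self.buckets[cell]:
                if layer is None or s[0] == layer:
                    if s[1] < x2 and s[3] > x1 and s[2] < y2 and s[4] > y1:
                        hits.add(s)
        return hits

idx = AreaIndex()
idx.add(("metal", 10, 10, 50, 20))
idx.add(("poly", 400, 400, 450, 420))
near = idx.query(0, 0, 100, 100)   # touches only the buckets near the window
```

The same index supports both access patterns the text mentions: by physical locality (the window) and by mask (the `layer` filter).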
Around this hub are provided several utilities concerned with mass input/output, and a number of powerful utilities including Automatic Layout, Simulation, Function Checking and Design Rule Checking. Different utilities need to access the database in different ways: for instance, a utility to generate pattern-generator tapes needs to access the database on a mask-by-mask basis, whereas a function checker or dimension checker needs to access it polygon by polygon, across masks but within the same general area. Access by area is particularly important when using a graphic editor, since it speeds up response time by limiting the amount of database accessing required. One of the ways of achieving (3), and also of tailoring the database to the design process, is to allow the definition of sub-designs of which instances may appear several times on the chip; these should be capable of a large depth of nesting, rather than the depth of 2 or 3 allowed by some of today's systems.

These utilities are connected together by a database which, at the time of its design, represented a breakthrough in data storage for LSI. Circuits can be digitised using an off-line digitiser (of which two standard ones are supported at the moment) or input from other CAD systems. The system contains enough parameterisation to allow the auto-layout module to be technology- and process-independent.

The two main checking programs comprise an automatic function checker and an automatic design rule checker. The function checker is, of course, not restricted to design output from the automatic layout module, but is sufficiently flexible in terms of how the circuit is constructed to allow great latitude in the constraints it puts on the designer; this means that even very loosely constrained manual layouts are amenable to analysis using this technique.
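A first step in such a function check is recovering electrical connectivity from the raw mask geometry. A minimal sketch using union-find over touching rectangles; the comparison against the intended circuit is omitted, and this is not GAELIC's actual algorithm:

```python
def extract_nets(shapes):
    """Group mask rectangles (x1, y1, x2, y2) into electrical nets:
    shapes that touch or overlap are merged with union-find, a first
    step toward checking that the geometry is the intended circuit."""
    parent = list(range(len(shapes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def touches(a, b):
        return (a[0] <= b[2] and b[0] <= a[2] and
                a[1] <= b[3] and b[1] <= a[3])

    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if touches(shapes[i], shapes[j]):
                parent[find(i)] = find(j)

    nets = {}
    for i in range(len(shapes)):
        nets.setdefault(find(i), []).append(shapes[i])
    return list(nets.values())

# Hypothetical metal shapes: the first two abut, the third is isolated.
metal = [(0, 0, 10, 2), (8, 0, 12, 6), (20, 20, 25, 25)]
nets = extract_nets(metal)
```

Once connectivity is known, the recovered netlist can be compared against the logic diagram or, as in the text, turned into an effective circuit diagram for the designer to inspect.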
In between the input and output utilities stand the automatic design aids. The automatic layout module is unique in that it is the only commercially available one allowing variable-sized cells. Cells are defined and archived using any of the methods available for input to the GAELIC system, and this, together with a circuit definition, is processed either automatically or interactively to generate a set of masks for the circuit. Output is to the GAELIC database, allowing several iterations round the loop to be performed, so that the auto-layout program can be used for sub-designs as well as for a complete chip. Because of the novel approach adopted by this program, time spent at the terminal is of the order of half an hour per run, allowing the designer to run the program several times, experimenting with different layouts.

The logic simulator is applicable to both static and dynamic logic, and to both combinational and sequential circuits. It incorporates a wide variety of built-in gates and memory devices and provides facilities for macro devices and complex gates.

The function checker works from a set of mask data held in the database and, together with some extra information supplied by the designer, outputs what is, effectively, a circuit diagram of the design.

The design rule checker takes a novel approach: the designer codes up, in tabular form, a set of custom rules, which are compiled into a set of programming-language routines. These are further compiled, together with a piece of standard code, to produce a custom program whose speed of operation stems from cutting down on generality and tailoring the program to one particular set of rules.
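The table-to-routine idea can be sketched in miniature: a designer-written rule table is "compiled" into specialised check functions with the generality stripped out. The rule values and the two rule kinds below are hypothetical, and real checkers work on merged polygons rather than raw rectangles:

```python
# Hypothetical rule table in the spirit of the text: the designer writes
# layer names and limits; the "compiler" turns each row into a check
# routine specialised for that one rule.
RULES = [
    ("metal", "width",   4.0),
    ("metal", "spacing", 4.0),
    ("poly",  "width",   2.0),
]

def compile_rules(rules):
    checks = []
    for layer, kind, limit in rules:
        if kind == "width":
            def chk(shapes, layer=layer, limit=limit):
                # flag rectangles whose narrow dimension is under the rule
                return [s for s in shapes[layer]
                        if min(s[2] - s[0], s[3] - s[1]) < limit]
        else:  # spacing: edge-to-edge gap between rectangle pairs
            def chk(shapes, layer=layer, limit=limit):
                rs = shapes[layer]
                return [(a, b) for i, a in enumerate(rs) for b in rs[i + 1:]
                        if max(max(a[0] - b[2], b[0] - a[2], 0),
                               max(a[1] - b[3], b[1] - a[3], 0)) < limit]
        checks.append((f"{layer}-{kind}", chk))
    return checks

# Shapes per layer: (x1, y1, x2, y2), values invented for illustration.
shapes = {"metal": [(0, 0, 3, 10), (5, 0, 20, 10)], "poly": [(0, 0, 8, 8)]}
report = {name: chk(shapes) for name, chk in compile_rules(RULES)}
```

Each closure carries its own layer and limit, so the generated program touches no rule table at run time; that is the speed gained by trading generality for specialisation.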
This design rule checker, the fourth main component of the system, thus embodies a completely new procedural approach, adopted for efficiency and flexibility.

The output of the simulator is in the form of a trace of both selected inputs and outputs, together with any 'spikes' and 'glitches' that may be detected. The outputs from GAELIC are also varied: as well as utilities to drive no fewer than four different plotters, the system shows a whole cross-section of the use it has had over the years, supplying the means to drive the Ferranti Master Plotter in both modes, GYREX, David Mann and Electromask pattern generators, and to output a design to stand-alone design systems, to name but a few. This approach, when embedded in a computer operating system, has been found to help considerably with design management.

The GAELIC system contains such a large range of facilities that it is impossible to cover all of them in detail here, but it does represent what is possibly the most advanced integrated CAD system available today.

[Figure: GAELIC database system. The database offers access by layer (or mask) and by physical locality, reflects the design structure, minimises filestore accesses, and compactly stores design data.]

[Figure: GAELIC VLSI design system overview. Inputs: GAELIC language, graphics editor, digitisers, other design systems. Utilities: automatic IC layout, function checking, dimension checking, archiving. Outputs: electron beam systems, pattern generators, check plotters, other design systems.]

References

(1) J. D. Williams, "STICKS, a new approach to LSI design", Master's Thesis, MIT, June 1977.
(2) K. J. Loosemore, "IC Design: Misery and Magic".

PROJECT SESSION
Chairman: E.
EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS

INTRODUCTION

A. De Mari*
The Commission of the European Communities

ABSTRACT

This paper introduces the salient features of a feasibility study on computer-aided design of digital circuits and systems, sponsored by the Commission of the European Communities, with the aim of assessing state-of-the-art techniques, requirements and possibilities of further development within the European Communities' Member States. The study was carried out by an international consortium comprising NIXDORF, SEMA and PLESSEY, with the consultancy of BRUNEL UNIVERSITY, and was completed in October 1978. Results are described in the three papers following this introductory presentation.

*on part-time secondment from FIAT-TEKSID

Subsystems with increasing inherent intelligence, allowing the most diverse and self-contained functions, are manufactured on single-chip devices, altering in a major degree the design process needed to obtain tractable and fully controllable products. Computer aids developed in the past years have been stretched to the utmost of their capabilities in the attempt, often unsuccessful, to cover ever-increasing requirements in terms of modelling accuracy, performance prediction and design for testability.

Specific themes covered by the study range from quantitative system conceptualisation and description, to modelling (from basic logic elements to computer components), performance prediction, and design for testability, together with methodology and user environment. Because of limited resources, device studies at the physics level on one side, and mask design problems on the other, were left outside the scope of the project. The ultimate objective of the study was the definition of recommendations for cooperative development actions, if these appeared to be beneficial and desirable.
The study was awarded by the Commission in mid-1977 to an international consortium led by SAGET.

1. INTRODUCTION

Logic circuit design has been heavily influenced in recent years, and will probably be more so in the near future, by the dramatic evolution of technology towards higher levels of component integration and complexity. The project reported herein concerns a technical and economic feasibility study (user-oriented) undertaken by the Commission of the European Communities on computer aids for such design: since significant new computer-aided design packages call for accurate planning and substantial investments (up to millions of European Communities Units of Account), a preceding feasibility study becomes indispensable.
The project comprised two parts:

a. an exhaustive survey of the current state of the art of computer aids, design methodology, evolution in technology, and user requirements (Survey Task);

b. an analysis: evaluation of survey data, assessment, identification of problem areas, conclusions and recommendations for further work (Analysis Task).

2. BACKGROUND

The Council of the European Communities approved on 15th July 1974 a Resolution on a Community policy on data processing (OJ No. C86, 20.7.1974). The Resolution was based on the awareness of: the importance of data processing for the economic and technological position of the Community in the world; the imbalance of the data processing industry in the world and the unsatisfactory level of applications within the Community; and the effectiveness of competition and the need to encourage European-based companies to become more competitive. It rested on the conviction that both companies controlled from outside the Community Member States and European companies can coexist and prosper in an expanding market, and that a more effective use of resources is obtainable through cooperation and joint actions in suitable fields. The Resolution welcomed, among other initiatives, the Commission's intention to submit priority proposals concerning a limited number of joint projects of European interest in the field of data processing applications, and the promotion of data processing applications and of industrial development projects in areas of common interest involving transnational cooperation.

On 13th March 1975, the Economic and Social Committee drew up its opinion on the communication from the Commission to the Council concerning initial proposals for priority projects on data processing (OJ No. C263, 17.11.1975, p. 44). That opinion stated, among other comments, that the proposed priority projects could make a useful contribution to the Community policy on data processing; it also indicated that the responsibility for the implementation of such projects resides largely within the data processing industry in the Community.

On 18/29th May 1975, the Commission submitted to the Council a proposal for a Council Decision adopting a number of draft projects on data processing (OJ No. C99).
On 22nd July 1976, the Council decided to adopt a series of three joint data processing projects (OJ No. L223, 1976), including the study in computer-aided design of digital electronic circuits (hereafter called the "CAD Electronics Study"), and approved the appropriations necessary for carrying out the projects within the budgets of the European Communities. The Council motivated the decision, also on the grounds of recognised priority for those projects, as the first specific practical measures to be taken with a view to establishing a Community data processing policy, likely to help to meet the needs of users and to increase the ability of the European-based data processing industry to satisfy these needs on the European and world markets.

The Commission was entrusted with implementing the projects. In particular, the Commission is assisted by an Advisory Committee composed of representatives of the Member States, whose duties are to assist the Commission in the execution of all data processing projects; the Committee was thus set up for the specific purpose of assisting projects adopted in the above Decision, with specific duties which included the choice of Commission project leaders. In addition, provision was made within the Decision for a project director (or leader) for the operational task of carrying out each of the projects adopted, assisted and advised by a technical sub-committee (hereafter "Technical Committee") consisting of one or more technical experts per Member State. Each project leader was to report directly to the Advisory Committee.

The Advisory and Technical Committees were set up in the Fall of 1976, and the project leader for the CAD Electronics Study was selected in November 1976, to start work on the preparatory chores of the project on 1st December.

3. TENDER ACTION

Technical specifications and work statements for the CAD Electronics Study were prepared by the project leader during the months of December 1976 and January 1977, with the assistance and advice of the Technical Committee. A call for tenders was published on 1st February 1977 (OJ No. C24), with the essential information on the tender action regarding duration (six weeks, closing date 18th March), availability of the Invitation to Tender Document (8th February), an open briefing for all potential tenderers (23rd February), an indication of the procedure to be followed for evaluation of bids, and a brief description of the project with its duration (12 months) and estimated level of effort (a total of 42-58 man-months). The maximum budget available for the tender was indicated as 210,000 Accounting Units.

The Invitation to Tender Document, delivered on 8th February 1977, comprised the technical specifications and work statement for the study, together with, among other items, evaluation criteria, administrative and contract conditions, and conditions for presentation of tenders.
In particular, improved computer-aided design techniques were considered necessary to contribute to the strength of the European electronics industry.

Earlier, on 23rd September 1975, a Resolution had been deliberated by the European Parliament concerning its opinion on the communication from the Commission of the European Communities to the Council containing initial proposals for priority projects in data processing (OJ No. C239). Such resolution included, among other items, the approval of the Commission's proposed choice of projects in the field of data processing.

The organisational and operational structure was centred, for each project, on one or more outside organisations selected as contractors to perform the actual work, supervised by a Commission project leader assisted and advised by a technical sub-committee; the Decision also defined the composition and responsibilities of the technical sub-committees.

4. STUDY SPECIFICATIONS

A brief synopsis of the technical specifications for the study is given below.
Time projection of designers opportunities and requirements within an extrapolated electronics and computer evolution in the 1979-82 period. adopted earlier by the Advisory Committee. if appropriate. DE MARI Following the close of the tender action.250 A. with detailed justifications. was unanimously recommended by the Evaluation Group as the front-runner being fully capable of executing the study. The consortium included NIXDORF (West Germany. and were subsequently merged through a predefined weighting algorithm to reach a final quantitative assessment of each bid. with a description languagelis referred to as synthesis. PLESSEY (United Kingdom). Synthesi s The process leading to a detailed logic design from the functional behaviour of the system quantitatively defined (e. I Technical Objectives a. Recommendations for further Community work. taking into account developments elsewhere (e. Availability. PMS. Assessment of current state-of-the-art of computer-aided logic circuit design. subject to some negotiations of minor points. and economic benefit) for further development projects within the EEC Member States. language specialisation. scientific. Topics relevant for the investigation include degree of diffusion and adequacy in the industrial environment.2. administrative. USA and Japan). members of the Technical Committee. 2. problem areas. level of conceptual abstraction (algorithmic. user requirements. Internal and external functional relationships and algorithms. appeared in the Invitation to Tender Document (ITT No.g. 8. a systematic and thorough tender evaluation procedure. requirements . register transfer. or in general product specification means. 1. indications of cost benefits. Such Group Included the Project Leader. at the completion of the necessary negotiations with the tenderer of technical and contractual details. (Luxembourg). Investigation of the opportunity (in terms of strategic.). b. T/3/77. 
complemented by qualitative appreciations and judgements.g. etc.l. as backbone to the study. are considered. With regard to such recommendations. for the present purposes.1977. was performed during the month of April by an Evaluation Group. made on the basis of estimated highest returns and consistency with resources available.a. led by SAGET S. independent experts from within and outside the Commission. ) with emphasis on macrologlccomponents (address. are of particular relevance: a. asynchronous. Three aspects.. highly related to technology evolution.. etc. model standardisation for different components or technologies. b. ease of usage. formalisation and computer aids to assist the designer in such a delicate process (especially for the higher levels of complexity of modern practical systems) represent an Important topic of investigation of the study. registers. Integration with related procedures Organisational. selective trace). on several hierarchical levels. combinational.EUROPEAN COMMUNITIES STUDY: INTRODUCTION 251 and possibilities of conceiving general strategy. ¡nertial delays. system architecture. hazards. usually obtained with circuit simulation software packages. Is an essential tool for design verification including logic verification. In view of the ever increasing circuit complexity.. c. considered relevant to the study. concerning the separation of the system into modules. ) to computer components (processor. initialisation strategy. Testab i I i ty As circuit complexity Increases. ). are objects of the investigation. 3. memory I/O control. Characteristics of computer tools to be examined Include fault types. Other pertinent aspects comprise input description techniques (pre-processors).. fault modelling. Topics of relevance to the study Include delay modelling (zero and unit delay. as a calibration of the degree of involvement of software predicted with quantitative tools rather than influenced by qualitative attitudes. 
Methodology The entire range of specific topics (IIA) should be subject to a methodology investigation to devise a set of procedural ground rules for all specific Items and their relative ¡nterdependencies to achieve an overall design methodology. oscillations) deserves the major emphasis. Performance Prediction Circuit performance prediction.. The treatment of error conditions (signal spikes.). sequential). next event. I IB Technical Breakdown: General Topics Technical aspects of general nature. The techniques adopted. testing becomes more and more a vital problem to be tackled at the very Inception of the design stage. 4. main internal driving mechanism (time flow algorithm: fixed-time increment. are listed below. component families considered within the scope of the present study range from logic components (gates. hardware/software trade-offs. Component Model I ing Components may be defined as primitive devices which may be interconnected for implementing a specific application system. concerning the distribution of resources and the internal organisation of the system. costeffectiveness. associated to the inherent limitations of computer hardware/software tools. Design strategy and verification of circuit testability require computer tools for two main purposes: test pattern generation and fault simulation. timing.. races. drives. according to the typical opportunities or constraints the designer is faced with.) and microcomputer components (arithmetic. 2. logic value multiplicity. logic specialisation (synchronous. 5. partitioning. 1.. assignable gate delays. parallel fault and deductive fault simulation for the latter. hardware and software requirements must be satisfied to interface collateral and . data structure efficiency. methodology. strongly linked to two or more specific topics of section IIA.. such as D-algorlthm and Boolean differences for the former purpose. depicting technical specifications of the products. remote access. modularity. solutions. 
design methodology and computer resources. Software implementation Key issues In software management and implementation Include language. a first-survey iteration. Analysis Task The Analysis Task comprises firstly a critical evaluation of the data gathered and ordered in the Survey Task so as to expose technical inadequacies of current computer aids. Computer resources The overall hardware configuration of the computer resources running the application software package represents an important issue. 5. I. problem areas. . user requirements. The Analysis Task includes several Work Packages concerning pre-analysis. USA and Japan. etc. economical and organisational nature on the basis of results established during the previous Activity. input/output organisation. assembly. In case motivations emerge for further action. are a matter of investigation. documentation. More detailed information on the Technical Specifications of the Study is availabIe in Ref. also among the small design outfits and most diversified user groups. some of them being automated. in-house versus service bureau operation. organisation. organisational environments. data ordering. to define the profile of the users population potentially served by such future developments. Man-machine interface Acceptance by the users community of a computer aid deeply involved In the design process is strongly conditioned to a well treated human engineering problem. to identify a cost/benefit judgement in every specific area. Such conclusions should take into account time projections and extrapolations to predict the impact of changing technology and environment during the period 1979-82. evaluation. documentation. The necessary prerequisites for a successful package. budgeting. The level of Integration and interfacing problems to such processes. this task draws detailed final conclusions of technical. The survey comprises all specific and general topics listed above (Section II). schedules. system reactivity. 
operational matters. data integration. batch processing support. with particular attention to the EEC Member States. EEC states opportunities. overall costs. problem areas. Properties of specific interest include on-line multi-user configurations. preliminary guidelines. 4. portability. I I I Work Breakdown The work breakdown structure comprising several Tasks and Work Packages is synthetically summarised below. available cost/benefit values. B. if appropriate. Survey Task An exploratory survey of the current state-of-the-art of computer-aided design of logic circuits and systems should be pei— formed in Europe. this Activity requires the preparation of a proposal for follow-on development projects. a second survey iteration. Secondly. recommendations and action ρ Ian. A. and finally to draw broad guidelines for a follow-on development project within the EEC scene.252 A. graphics and special terminal capability. programme and data layout. benefits with particular regard to EEC Member States opportunities. are particularly relevant to the study. testing. DE MARI downstream processes (production. impact of technology change.). The Survey Task includes several Work Packages concerning standards. 3. Italy. 3. Carter. technical details and study results are reported In the following papers (ref. G. Organisation. 2.P. The continuing Interest and essential contribution of Mr. Blr. Thus the Analysis Task was completed July 1978 and reports were available October 1978. REFERENCES 1. November 1978. The slippage. are highly appreciated. "European Communities Study on CAD of Digi ta I'Circuits and Systems . Musgrave. Qui I lin.EUROPEAN COMMUNITIES STUDY: INTRODUCTION STUDY EXECUTION 253 Work by the multinational consortium started in July 1977.Survey in USA and Canada". was absorbed by the contractor through additional overlap of the Analysis Task and by prolonging the study period by one month. A. 
also caused by the coincidence of holiday periods with the peak of the two-step Iteration. Symposium on ComputerAlded Design of Digital Electronic Circuits and Systems. De Mari. Throughout the entire study execution monitoring was performed by the Commission at monthly Intervals and constructive interactions with the Technical and Advisory Committees were assured to provide the appropriate guidelines on emphasis and resource allocations. 3. Brussels. Proceedings. 4.Organisational Aspects". E. "European Communities Study on CAD of Digital Circuits and Systems . Head of the Joint D. and particularly the dedication of Mr.TechnicaIPerspective". S. Bologna. Project Bureau of the Commission. Belgium. W. 4 ) . September 1978. The Survey Task was successfully completed early March 1978 with approximately two months discrepancy with the original schedule. ibid. "European Community Study on CAD of Digital Circuits and Systems". Qui I lin. . SAGET Project Manager. and of Mr. International Conference on Interactive Techniques in Computer-Aided Design. W. ibid. 2. Proceedings. G. A. Musgrave of Brunei University are fully acknowledged. "European Communities Study on CAD of Digital Circuits and Systems . Finally the commitment of the Project Team. ACKNOWLEDGEMENTS The author gratefully acknowledges the competent assistance and advice of the Members of the Technical Committee throughout the initiation and execution of the project. . financial and contractual portions of the Invitation to Tender. of Irvine. assistance and advice to the study was offered by Plessey Telecommunications. UK Plessey Central Research Establishment. editor. In early February 1977 the Invitation to Tender for the CAD Electronics Study. it was necessary to form a suitable European Consortium. California with regard to arrangements for United States interviews. Uusgraoe. 
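The tender evaluation described above merges the Evaluation Group's numerical gradings through a predefined weighting algorithm into a final quantitative assessment of each bid. The actual aspects, weights and figures used are not given in the text; the following is only a minimal sketch of such a weighted merge, with entirely hypothetical names and numbers:

```python
# Sketch of a weighted merge of per-aspect tender gradings.
# All aspect names, weights and gradings below are hypothetical
# illustrations, not the figures used by the Evaluation Group.

WEIGHTS = {"technical": 0.50, "managerial": 0.20,
           "administrative": 0.15, "industrial": 0.15}

def merge_gradings(gradings):
    """Combine per-aspect gradings (0-10 scale) into one weighted figure."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights normalised
    return sum(WEIGHTS[a] * g for a, g in gradings.items())

bids = {
    "bid_1": {"technical": 8, "managerial": 7, "administrative": 6, "industrial": 7},
    "bid_2": {"technical": 6, "managerial": 8, "administrative": 7, "industrial": 6},
}
# Rank the bids by their merged assessment, best first.
ranking = sorted(bids, key=lambda b: merge_gradings(bids[b]), reverse=True)
```

The qualitative appreciations and judgements mentioned in the text would accompany such figures rather than be folded into them; the sketch shows only the quantitative merge.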
COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company
© ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS - ORGANISATIONAL ASPECTS

W.E. Quillin
CAD Electronics Project Manager
SAGET (Luxembourg) S.a.r.l.

Abstract: The study had two stages, the first a Survey Task for data collection and the second the Analysis Task, for detailed analysis of the data collected and formulation of CAD Business Plans. The organisational details of these two stages are given, showing how this multinational study was conducted, problems overcome and results produced within the required timescales.

1 INTRODUCTION TO THE SAGET STUDY TEAM

1.1 Response to the Commission of the European Communities' Invitation to Tender, 1977

In early February 1977 the Invitation to Tender for the CAD Electronics Study, Tender Number T/3/77, was received by SAGET, and preparation of a bid for the study commenced. In the limited time before the return date of 21st March it was necessary to form a suitable European Consortium. Assisted by personal contacts, and by contacts which already existed between SAGET, Plessey and other European organisations, the consortium which was organised by SAGET comprised the following organisations:

Sema, France - Expertise in technological surveys
Nixdorf, West Germany - Major European computer manufacturer
Plessey Radar, UK - Large electronic company using CAD techniques
Plessey Central Research Establishment, UK - LSI design and development
Brunel University, UK - CAD research group

The first three companies named were employed by SAGET as subcontractors, the latter two organisations as consultants to the project. In addition to these organisations, assistance and advice to the study was offered by Plessey Telecommunications, with regard to the telecommunications aspects of CAD, and by Plessey Microsystems Inc. of Irvine, California, with regard to arrangements for United States interviews.

It was not possible, of course, to include organisations from every member nation in a relatively small study contract, but the organisations included in the SAGET bid had sufficient technical experience, personal contacts and linguistic abilities to cover all relevant establishments in the member nations. It was considered that, with these organisations in the bidding team, an excellent coverage of the various CAD fields had been achieved, and also that a good geographical spread amongst EEC member nations had been arranged.

With assistance from the members of the Consortium, the SAGET bid was prepared and delivered to the Commission by the required date, giving full consideration to the technical, managerial, financial and contractual portions of the Invitation to Tender. Following the Commission's adjudication phase, the contract was offered to SAGET. Preparatory work started in June 1977 during the Contract Negotiation Phase, and the contract was fully agreed at the beginning of July 1977.

2 TECHNICAL ORGANISATION OF CONTRACT

The activities proposed for the contract in the SAGET bid were similar in most respects to the activities which had been given by the Commission in their Invitation to Tender. The contract had two major tasks. The first was a Survey Task, to collect data from organisations active, or potentially active, in CAD for digital electronics, on a world-wide basis. The second, the Analysis Task, was to analyse the collected results and produce Business Plans showing how best the Commission could assist the development and use of CAD techniques.

2.1 The Survey Task

The Survey Task was itself divided into two activities: a first survey to cover as large a number of relevant organisations as possible, within the timescale and financial constraints, and a second, more detailed survey of a smaller number of organisations - the ones which had been shown by the results of the first survey as having most to contribute to the project.

(a) First Survey Iteration

The First Survey was given a formal structure by providing interviewers with questionnaires. Three different but inter-related questionnaires were used:

Questionnaire A - CAD Users
Questionnaire B - CAD Non-users
Questionnaire C - CAD Suppliers

A and C had supplementary parts to be completed to give details of packages in use or provided. In addition to the main questionnaires, the interviewer was provided with additional questionnaires S, M and T, to collect details of Synthesis, Modelling and Test Pattern Generation. The questionnaires were designed by using the technical survey expertise of SEMA together with the detailed CAD knowledge of the other members of the SAGET Consortium. The questionnaire was agreed at the beginning of August, after having been tried out in pilot interviews in the UK, France and Germany. In fact these pilot interviews did not result in any major changes being required, and they showed there was no linguistic problem in having the questionnaires in English.

The survey in Europe was started in August, as had been planned and where holiday arrangements permitted, and the majority of interviews were completed by mid-October 1977. The interviews were divided amongst the members of the SAGET Consortium along the following lines:

United Kingdom - Plessey Radar, Brunel University
France, Benelux - Sema
Germany, Italy, Denmark - Nixdorf
Sweden - Brunel University
United States, Canada - Plessey Radar, SAGET
Japan - SAGET

It had been planned to interview 67 organisations on the first iteration but, to ensure as good a coverage as possible of relevant organisations within the Project's constraints, 85 organisations were eventually visited. The organisation of this coverage was considerably assisted by the members of the Commission's Technical Committee for this CAD Electronics project and by the Commission's Project Leader, who gave considerable support to the arrangement and scheduling of the interviews, enabling the correct personnel to be identified if they were not already known.

Initial contacts were made with organisations by telephone; this proved a much better and more flexible approach than letter or telex contact, and it also permitted a discussion of the project, giving its aims and organisation to prospective interviewees. Prior to interview, a summary of the required details, together with introductory letters from the Commission and SAGET, was provided. The questionnaire itself was not sent to establishments before interview, except on the few occasions this was requested: it was considered that the sending of such a comprehensive questionnaire could have proved off-putting to the establishments. A very high degree of co-operation was found from the establishments visited, and there were very few refusals for company security reasons. In general the interviews were conducted by one person but, where interviews of special interest were concerned, two interviewers were used where possible. Interviews were scheduled to last one day.

The detailed interview schedules required for the interviews in the United States proved more difficult to arrange, and it was not until October that it was possible to have interview schedules and agreements for interviews in a suitable form to permit these to start. The United States interviews were divided between two interviewers, one covering the South-East and Eastern states, and the other covering the West coast. Altogether 20 establishments were interviewed in the USA and Canada, including an interview in Canada.

Interviews in Japan were difficult to organise from Europe but, because of long-established links between SAGET and the Oki Company in Japan, Oki were able to arrange assistance from the CAM Committee of Japan (Computer Aided Manufacture), and eight establishments were interviewed. The Japanese interviews took place at the beginning of November 1977. SAGET would like to express its thanks to Oki and the CAM Committee of Japan for this support.

All the interviews were completed by mid-November, and a pre-analysis of the data collected was commenced to enable locations to be identified for the second survey iteration. Also, design of the second iteration benchmark tests was started by Brunel University.

(b) Second Survey Iteration

Pre-analysis of the first survey data and design of the benchmarks were completed by the end of December 1977, and arrangements for the second survey were made so that interviews could commence as soon as possible after the Christmas holiday period. The identification of establishments for the second survey was made in the light of the pre-analysis of the first survey data, and in conjunction with the Technical Committee and the Commission's Project Leader. The first iteration results had shown that, due to the structure and company security of Japanese industry, there would be little to be gained from a second iteration visit to Japan; hence visits were concentrated in Europe and the USA.

The second survey did not have a formal questionnaire structure as did the first, but interviewers' notes for guidance were prepared. Interviews on the second iteration were tailor-made for the establishments being interviewed, and concentrated on two factors: points which may have needed amplification from the first survey, and the benchmark tests. Names of all personnel to contact in the organisations were known from the first iteration, and these contacts were made, once again, by telephone, backed up by letters of introduction from the Commission and SAGET. Scheduling the second iteration interviews was more difficult than the first iteration, due to the need for computer access and technical assistance for the running of the benchmark tests. Interviews were conducted in general by two people and lasted about two days. For the second survey in the USA, a team of two people from SAGET and Brunel University visited the selected organisations in February; all the planned visits were made, despite considerable difficulties caused by extremely bad weather in the North-Eastern United States.

The original project plan had been to cover 20 organisations in this second survey; by the end of the Survey Task a total of 23 organisations had been surveyed, giving as much data as possible for the Analysis Task. It took a longer time period to complete all the interviews than had been expected, and the European interviews were not finished until the start of April 1978. To avoid this causing excessive slip to the overall project timescales, this part of the Survey Task was overlapped as much as possible with the start of the Analysis Task; hence there was much more overlap of the Survey and Analysis Tasks than had been planned at the start of the study.

2.2 The Analysis Task

The Analysis Task was commenced as soon as possible during the Survey Task, starting with the pre-analysis of the first survey data, continuing with the totalisation of the answers and comments collected on the first survey, and continuing with a detailed analysis of the second iteration benchmark results as these became available. This overlap helped minimise delays to the total project which could have been caused by scheduling difficulties and holiday periods during the Survey Task. Compared with the Survey Task, project timescale scheduling during the Analysis Task was much easier, not being influenced by factors outside the control of the members of the SAGET Consortium.

Analysis of the data collected in the Survey Task was completed by the end of April, and during May the preparation of the final report started, portions of this being allocated to the various members of the Consortium for initial writing, prior to final integration. This report consisted of an outline of the Survey Task (which had been fully documented in three volumes as the Survey Phase Report), details of the Analysis Phase activities, and a chapter on the current situation in CAD with respect to Logic Specification, Test Pattern Generation, and Similarities and Differences in CAD for Circuit/System Design and CAD for IC Design. This was followed by a chapter on the Technological Evolution, discussing both the needs of CAD this evolution brought about and the benefits to CAD of this evolution, and then by an overview of CAD problem areas and the EEC opportunities to which these gave rise. Finally the proposed CAD Business Plans were given: three in all, two of which were in the area of increasing CAD awareness and the benefits to be gained by using currently available CAD aids, the third being in the field of component model development to assist the usage of CAD packages. These Business Plans carefully reflected the needs and requirements which had been established during the Survey and Analysis Tasks, within agreed technical guidelines and timescales.

3 EEC OPPORTUNITIES

The following is a list of opportunities which were identified for the Commission, together with comments on these:

A. To influence constituent governments, for them in turn to influence certain education and training courses to reflect the impact of the 'digital revolution'.

B. To establish, throughout the member nations, often within industrial companies, organisations responsible for the retraining of electronic engineers. Item B would be supported by a training programme established by seconded leading experts in the field to provide:

(1) Programme syllabus.
(2) Lecture notes.
(3) Video tape lectures and demonstrations.
(4) Development of Computer Aided Instruction (CAI) for CAD.
(5) Responsibility for a programme of 'Workshops' held throughout Europe.

The seconded consultants would be supported by a resident team who would provide:

(1) Administration.
(2) Day-to-day advisory service.
(3) Software maintenance.
(4) Software documentation.
(5) Workshop/conference organisation.
(6) Audio visual aids group.

It is envisaged that, by changing the seconded consultants each year or every other year, different problem areas will be tackled efficiently and effectively with a dedicated team. More importantly, this organisation will provide the catalyst for spontaneous development. (N.B. in no way should this be a large organisation.) This organisation could also be responsible for providing CAD packages to training establishments for student familiarisation; there is evidence that some suppliers would release earlier versions of their packages for a nominal fee.

C. Provide scholarships for engineers who could be retrained in the digital electronics field for industry. (The scholarships should be for 12 months and support the man on a post-graduate course.)

D. The procurement organisations should pressurise component suppliers to provide as much detail as possible about components, and in some cases have joint projects to establish data characteristics suitable for component modelling for CAD.

E. Establish a component model data base and provide the necessary back-up service to maintain and update it and give user advice.

F. Utilise European data communications (e.g. Euronet) to distribute E.

G. Set up a number of specific product projects with companies from several member nations involved. This is not to do global reporting, but to produce a defined product, such as ATE for microprocessors, an Automatic Test Pattern Generation to Automatic Test Language for Avionic Systems compiler, or a concurrent logic simulator.

H. To encourage members to retain the top engineers in this field by ensuring the industry is viable. Several instances were found during the study of teams of European CAD experts being enticed to the United States for much greater rewards and better facilities.

I. Establish a comprehensive set of standards for European use for CAD. It is a widely held view that this field is continuously changing and that it would currently be extremely difficult to agree and implement a comprehensive set of standards for CAD. It is envisaged, however, that projects such as the total co-ordination of the Business Plan for Component Modelling would, in themselves, provide de facto standards. These would interface to electronic design standards and cover circuit simulation, modelling, testing and verification, and also designed circuit elements and component models. Interfaces allowing the standard elements to be inputs to and outputs from currently available packages would be developed. The aim would be to allow transferability and interchange of both CAD packages and techniques.

J. CAD Symposium. The holding of this Symposium involved a considerable amount of effort from the Commission, its Project Leader, the SAGET team and the Symposium organisers; without the contacts made on the Study the organisation of such a Symposium would not have been possible. Hence it was one result of the study contract.

4 CONCLUSIONS

This study has conducted a number of detailed interviews in the field of CAD for digital electronics around the world. From thorough analysis of the data collected, the areas of maximum need in this subject have been identified and Business Plans for these generated. From the set of comprehensive data collected during the survey phases, together with the subsequent analysis process, a number of possible action plans were formulated. After considerable discussion within the study team, with members of the Commission, and using the expert advice of members of the Technical Committee, the most relevant action plans with the highest feasibility were expanded in the following areas: Component Modelling, CAD Symposium, and CAD Education and Training. The holding of the Symposium represented fulfilment of one of these Business Plans; this was the first action taken as a result of this study, and every effort will be made to ensure the other recommendations to assist in the development of CAD for digital electronics are followed up as rapidly as possible.

ACKNOWLEDGEMENTS

SAGET wishes to thank the EEC Directorate-General for Internal Market and Industrial Affairs, DG III, in particular the D.P. Projects Bureau and their Project Leader for this study, Mr. A. De Mari; the Technical Committee for this project; SAGET's sub-contractors, Sema, Nixdorf and Plessey Radar; and the consultants from Brunel University and Plessey Research Centre, for the effort, support and assistance they have all given during this study.

SLIDE 1 - THE STUDY TEAM
SAGET (Luxembourg) S.a.r.l. - Project Management
SEMA, NIXDORF, PLESSEY RADAR - Team Members
BRUNEL UNIVERSITY, PLESSEY RESEARCH CENTRE - Consultants

SLIDE 2 - FIRST SURVEY
Questionnaire A - CAD Users; B - CAD Non-Users; C - CAD Suppliers
AS, CS - A and C Supplementary (Package Details)
S - Synthesis; M - Modelling/Simulation; T - Test Pattern Generation
Interviews: 85 total (57 Europe, 20 USA, 8 Japan); 45 CAD Users, 4 CAD Non-Users, 44 CAD Suppliers

SLIDE 4 - ANALYSIS PHASE ACTIVITIES
First iteration pre-analysis; totalisation of answers to decision boxes; totalisation of comments made; graphical representation of numeric totals; detailed analysis of second iteration benchmarks; documentation of current situation in CAD; documentation of technological evolution; production of CAD Business Plans
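The "totalisation of answers to decision boxes" listed among the Analysis Phase activities amounts to counting, per question, how many interviewees ticked each pre-printed box, so that the totals can then be represented graphically. A minimal sketch of that tally, with hypothetical question codes and box labels rather than the study's own:

```python
from collections import Counter

# Sketch of totalising tick-box answers across questionnaire returns.
# Question codes and box labels are hypothetical, not the study's own.

def totalise(returns):
    """Count, per question, how often each pre-printed box was ticked."""
    totals = {}
    for answers in returns:
        for question, box in answers.items():
            totals.setdefault(question, Counter())[box] += 1
    return totals

returns = [
    {"uses_cad": "yes", "operation": "in-house"},
    {"uses_cad": "yes", "operation": "bureau"},
    {"uses_cad": "no",  "operation": "in-house"},
]
totals = totalise(returns)
```

The per-box counts produced this way are exactly what the "graphical representation of numeric totals" activity would then plot.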
SLIDE 3 - PROJECT TIMESCALES (July 1977 to November 1978)
Survey Task: Design Questionnaire; First Iteration Interviews; Design Benchmarks; Second Iteration Interviews.
Analysis Task: Pre-Analysis; Technology Prediction and State of the Art Summary; Analysis of Data Collected; Formulate Business Plans; Edit Report.
Symposium Organisation; Symposium.

SLIDE 6 - EEC OPPORTUNITIES
(a) Influence constituent governments.
(b) Establish retraining for electronic engineers; training programme by seconded experts.
(c) Provide scholarships for engineers.
(d) Pressurise component suppliers.
(e) Establish a component model data base.
(f) Use data communications (e.g. EURONET) to distribute (e).
(g) Set up specific product projects.
(h) Encourage retaining of top engineers.
(i) Establish comprehensive standards for CAD.
(j) CAD Symposium.

COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company
© ECSC, EEC, EAEC, Brussels & Luxembourg, 1979
G. Musgrave, editor

EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS - TECHNICAL PERSPECTIVE

GERALD MUSGRAVE
BRUNEL UNIVERSITY
UNITED KINGDOM

A study of this kind, which evokes several man-years of effort, results in a great deal of data, much of which has to remain confidential to the European Economic Commission. The analysis of this data, its correlations and contradictions, will therefore be given in global terms. In this paper most of the generalised trends and profiles will be presented, together with the design constraints of the data capture procedure. Details of the survey philosophy and design will be outlined, together with the data ordering procedure.
To further encourage this collection of data which may fall outside the framework of the questionnaire the Interviewers were encouraged to seek information about new work and future research plans. After due consultation with the multi-linguistic team it was agreed that the questionnaire would be in English. COMPUTER-AIDED DESIGN S Uuembou/ig. To this end a matrix format which indicated 'functions' by rows and 'degrees' by columns was used wherever possible. EAEC. This also helped to minimise the number of formal definitions required which is a major problem in the jargon semantics of digital systems. much of which has to remain confidential to the European Economic Commission. The analysis of this data. its correlations and contradictions. by having as many pre-answered boxes for ticking as possible. It was considered essential that a 'native-tongued' interviewer would conduct the survey in Europe. Brussels oí digital electronic circuits and syitems North-Holland Publishing Company G. The questionnaire has the prescribed format for rigour and ease of analysis whereas the Interview has the form which obtains response but with flexibility to take cognisance of the situation. However. since this was technically the most comprehensive language for the field of study.©ECSC. EEC. In this project the many characteristics and attributes of the aforementioned survey techniques have been incorporated in the strategy adopted. and the general sect Iona I Ising assured thorough cover of the total field as well as providing easier data ordering. and that there would be no written translations. organisation visits and interviews. SURVEY PHILOSOPHY There are many strategies which could have been adopted for the collection of technical data. I Iterative survey. The object of the first level was to provide the broad data base which would have significance for the rest of the project. or take note of sales presentations. 
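The way such a matrix format lends itself to quantitative analysis can be sketched in a few lines. The sketch below is our own construction, not taken from the study's questionnaire: the function names and degree labels are invented, and a final "don't know" column guarantees a complete (100%) response for every function row.

```python
# Illustrative only: function names and degree labels are invented, not taken
# from the EEC questionnaire. Each respondent ticks one 'degree' box per
# 'function' row; unanswered rows fall into the "don't know" column, so every
# row totals to a complete response.

DEGREES = ["none", "moderate", "extensive", "don't know"]
FUNCTIONS = ["simulation", "test generation", "synthesis"]

def tally(responses):
    """responses: list of {function: degree} dicts, one per interviewee."""
    counts = {f: {d: 0 for d in DEGREES} for f in FUNCTIONS}
    for r in responses:
        for f in FUNCTIONS:
            counts[f][r.get(f, "don't know")] += 1   # missing tick -> don't know
    return counts

answers = [
    {"simulation": "extensive", "test generation": "moderate"},
    {"simulation": "extensive", "synthesis": "none"},
]
c = tally(answers)
assert c["simulation"]["extensive"] == 2
assert c["synthesis"]["don't know"] == 1                   # unanswered row still counted
assert sum(c["test generation"].values()) == len(answers)  # 100% response per row
```

The pre-answered boxes reduce transcription errors in exactly the way described above: the interviewer only ever records a tick position, never free text, except in the comment space.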
The strategy of structuring the questionnaire to cover the various dispositions of the organisations and the functions of the employees was needed to cover the breadth and depth requirement for a comprehensive survey. Principally three classes were established: the user, the non-user and the supplier. Of course there are various masks over these classes, such as internal development, supply and use, but cognisance of the multiple roles that CAD can have within a company had to be recognised. A single questionnaire would have been ideal from many aspects of the study, but very often these functions are carried out by separate departments within a group and thus do not violate the three sections of the first iteration survey, namely:

Questionnaire A: for current and past users of CAD
Questionnaire B: for non-users of CAD
Questionnaire C: for suppliers of CAD

In general each of the questionnaires was identical in data-seeking aims, although the structure and details varied to reflect the interviewee's standpoint, particularly in respect of user and supplier (customer and marketer). The commonality is important in order to enable correlations and contrasts to be drawn.

In order to gain a full spectrum of views on CAD applied to digital electronic systems, the marking function of the interview had to be recognised, so that management views, including cost accounting, were brought together with those of technical personnel such as designers, researchers, and production and test engineers. Effecting this required a top-down structure, with details of company profits followed by questions orientated towards management and the determination of company policy. This was followed by a general enquiry of how, and to what effect, CAD was used within the company. This rendered a further lateral structuring to the questionnaire. The more technical aspects were dealt with in three separate sections dealing with synthesis, modelling and testing (S, M and T respectively). Here the questions had to be answered by the specialist, although the general philosophy of providing as many pre-answered boxes to be ticked was continued.

The planned work time for each interview was one day, so there was a limit to the depth of enquiry at this first iteration. A target of twenty institutions was set for the second iteration survey, which was deemed to be in greater depth. The second iteration could not be an analytically straightforward questionnaire, because it had to reflect the individual findings of the first survey, thus enabling the analysis team to identify those institutions which could be judged to be the most useful from which to gather in-depth information. Nevertheless, from an analytical point of view it was essential that it had some structure. Consequently the second iteration was conceived as an in-depth interview, where the first part was to consolidate the data given in the first iteration and the second was an evaluation of the existing software by means of test examples (pseudo "benchmarks"). It was recognised that there would be difficulty in obtaining permission to do benchmark testing, particularly in view of the high cost of running complex programs; therefore the objective was to devise a set of tests which explored the attributes and limitations of the CAD program.

It was also judged essential to validate the questionnaires and briefing notes for the interviewers by two pilot studies in each of the U.K., Germany and France, thus obtaining a measure of the linguistic problems as well as conducting basic field trials. To summarise, the surveying philosophy of always using people for direct interviews, with a well designed prompter (questionnaire) to ensure a common data base, was adopted. Adopting the questionnaire/interviewer philosophy would enable sufficient data in breadth and depth to be gathered to provide identification of problem areas and market leaders, which could form a potential major contribution to a European CAD work programme.

QUESTIONNAIRE PROFILE

Essentially there was only one questionnaire, with some change in emphasis and slant to accommodate the user, non-user and supplier of CAD aids.* The initial task of the questionnaire was to identify the organisation's products and then to build up a detailed picture of components used, their quantities and complexities, on a prescribed grid system. This grid would then provide a basic set appertaining to such factors as throughput of new designs, length of production runs or indeed the technology used, all of which could be a correlating factor behind the use or non-use of CAD techniques. In general the questionnaire commenced with the most general information questions, such as company operations data, and gradually became more detailed down to the level of identifying software packages. As a consequence of these factors, the final parts were concerned with defining where CAD systems were used within the organisation, the cost benefits and problem areas, etc.

In order to seek the less quantifiable data, such as the reasons for using CAD, it was essential to be as comprehensive as possible so that the interviewee's memory was fully primed and not dominated by the most recent access. At the same time it was desirable to have some degree of rating of the respective reasons, which resulted in the columned aspects of the arrays; all had three degrees of freedom. The final 'don't know' column was essential to enable this important data to be recorded, as well as to ensure a completed 100% response to the functions. Figure 2 gives a typical matrix structure used. This technique has been used many times by market researchers and tends to give much more reliable data compared to, say:

A4 Question 3: Can the benefits of CAD be specified other than cost factors (e.g. longer term product improvement, better company image, reduction in lead time)? Please comment.

The weakness with this type of question is that the responder may merely wish to satisfy the question and not provide a totally considered view. Of course, it is important not to totally regiment the views, and to counter this there was variation of format as well as the opportunity to give open comments.

In order to ensure that important trends, which may be seen from the response to one question, were indeed present, there were varying degrees of overlap built into some parts of the questionnaire. This results in double confirmation, or otherwise, giving an indication of the confidence in the results. For example, functions relating to 'CAD problem areas' had degrees of commonality with 'How do you view the need for development of CAD functions?'. Figure 3 covers this particular aspect. The final set of general questions were all designed to ascertain attitudes to possible EEC projects in this field. A typical set of questions is given in Figure 1.

* A complete copy of the questionnaire and benchmark tests has been published: "EEC CAD PROJECT, Questionnaire and Benchmark Tests".

For the more detailed information the supplementary questionnaires were used, where the general information about a package was ascertained under the following outline:

1. Identify package
2. Host machines
3. Input descriptions
4. Output medium
5. Backup
6. User problems

Each package warranted the completion of a supplementary questionnaire, depending on whether the program was classified as synthesis, modelling/simulation or test pattern generation (supplementary questionnaires S, M and T respectively). Thus a second survey of some 20% of the first sample was conducted, in order to glean the details of some existing packages and an appreciation of the future from the leading users and suppliers of CAD for the digital electronics field. Because users and suppliers were identified separately, the results indicated the differences in attitudes towards each package between users and suppliers; they also show how much the users understand the packages, and if the suppliers understand the user needs.

The questionnaire investigates two important characteristics about the three specific topics; these are: the accuracy of the automated procedures, and the efficiency of the automated procedures. Information about these two characteristics was obtained by asking for details of the techniques used. By knowing about the techniques used in the package, it was possible to assess the degree of accuracy and efficiency. All standard techniques have been included in the questions, and there was room for describing original techniques.

Logic synthesis covered the automatic generation of a design from some specification language. Form S contained questions on the subject and included:

1. Circuits considered
2. Algorithms used
3. Practicability of generated designs
4. Implementation
5. Output modes

Design verification covered two areas: how was a circuit modelled, and how was the model exercised in order to verify the design? Testing covered the automatic generation of test patterns and fault simulation. These two subjects overlap, because the simulators used for verification have the same structure as simulators for grading test patterns; therefore Form M contained questions on simulators for design verification and fault simulators for verifying test patterns, and included:

1. Level of description and simulation
2. Circuits considered
3. Modelling of circuit delays
4. Algorithms used
5. Extra questions covering fault simulators
6. Output modes

Form T contained questions only on the automatic generation of test patterns, and included:

1. Fault types considered
2. Circuits considered
3. Modelling elements allowed
4. Algorithms used
5. Output modes

SAMPLING CRITERIA

The nature of the questionnaires dictated that all three categories (user, non-user and supplier of CAD) must be included, but there were other dimensions upon which the potential sample was based, and some of these are summarised as follows:

1. Geographical distribution: it was considered essential to cover the USA and Japan as well as the European countries.
2. Gain a spread of companies whose products covered a wide spectrum, e.g. computers, telecommunications, aerospace, instrumentation, consumer electronics, military systems.
3. As broad a spectrum of technology as possible (e.g. to processors).
4. A broad spectrum from university research through R & D departments to those organisations where CAD is only used in production.
5. A full spectrum of size of organisation.

At all stages of the questionnaire design the expertise within the consortium was used to help perfect the system, often via several trial iterations with in-house guinea-pigs. Of course there was deviation from the ideal sampling criteria through refusals and other factors outside the control of the project team; however, these guidelines were used and resulted in the following profiles.

NO. OF ORGANISATIONS SURVEYED, BY COUNTRY
Benelux 4; France 12; Germany and Denmark 15; Italy 7; U.K. 17; Sweden 2; Japan 8; U.S.A. 20.

NATURE OF ACTIVITY (CAD Users / CAD Non-Users / CAD Suppliers / Totals): Computers; Telecommunications; Instrumentation/Process Control; Military Systems/Aero; Consumer Electronics; Others.
23 13 3 12 3 1 0 1 2 1 2 18 5 4 5 6 22 42 18 8 19 10 35 1 1

A similar well-balanced cover has been achieved in respect of the size of the organisations interviewed, with the range spanning those in excess of 300,000 employees and turnover in excess of $1000M, through to research institutions with effectively very little capital turnover and fewer than ten employees. Data on in excess of 80 packages have been collected. The general points worthy of note are that:
1. The users of CAD systems tend predominantly to use only simulation packages.
2. There is a growing development of packages associated with testing, often with related A.T.E. machines.
3. Suppliers of software (including internal developers) have a broader spectrum of packages than the user category.

SECOND ITERATION

To be able to accurately determine the current state of the art of CAD in digital electronics, and to help the European electronics industry, a further survey was undertaken that was different in nature but had a continuity from the first survey. It was planned that approximately twenty companies be revisited on this second iteration survey. The interviewers used were totally familiar with the first iteration answers, and had experience and expertise in the current standard practices on the topic of CAD of digital electronics, in both industry and research centres.

The structure of the second visit was two-fold: firstly, to gain further detailed discussions of the answers and comments made in the first iteration; and secondly, to apply a set of 'benchmarks' appropriate to the packages in use in the company. These aspects were covered in detail by a booklet which was produced for the guidance of the interviewer. The first part of the booklet prompted and aided discussions on the organisations' answers to the first iteration questionnaire, and included discussions of package details which acted as a preliminary to running the tests. These discussions were valuable for several reasons:

a. They gave a means of assessing the accuracy of the quantised answers.
b. They gave information about how pessimistic or optimistic the answers were.
c. They allowed a follow-up of important comments given.
d. They allowed an update on important lines of development not anticipated at the time of the first iteration survey.

It is essential for the survey to come down to hard facts, to come face-to-face with the CAD packages, and to gain proof that the answers are based on the interviewees' own experience with actual working programs. The technique used for doing this was the running of tests, to enable information to be gained on:

a. How advanced their CAD techniques are, by the results they produce.
b. How well the users interface with the system, by seeing how they coped with the tests and the problems associated with them.
c. The facilities used by the organisation to back up the CAD packages.

The tests have been designed to survey two important characteristics of CAD programs:

a. The accuracy of their results.
b. Efficiency in producing these results.

Thus information could be gained on how advanced the techniques used in a package were, and whether they were good standard CAD techniques or new, non-standard techniques. The tests enabled relative efficiencies, which are essential for the analysis phase, to be examined. To investigate efficiency fully, a realistic circuit of at least 500-1000 equivalent logic gates would need to be used. However, the planned time allocation for the second iteration interviews was two days per institution, and a realistic circuit would take at least two weeks to get running on any system. Therefore the tests were kept very small, but were designed to probe the programs for accuracy when confronted with difficult circuits that contain critical timing and feedback topologies. It is important to note that the problems the tests have been designed to look for are the same problems encountered in large circuits in practical use. The tests also form an ideal basis for aiding the discussions on the first part of the booklet, especially in respect of the problems facing both the suppliers and users of CAD packages.

TESTS

As many of the packages used fall into the simulation and testing area, many of the tests reflect the predominance of these packages by assessing the following problems. Some 13 tests using 7 different circuits were designed for surveying the specific topics of design verification, fault simulation and automatic test pattern generation, and a further 4 tests for surveying the specific topic of logic synthesis. The tests cover the subject of design verification by testing worst-case hazard analysis with respect to realistic delay tolerances. Test pattern generation is covered by testing how the automated procedures can cope with deep states, redundant states, critical timing, and the initialisation of state variables. Fault simulation is covered in two ways: firstly, if the package does not contain automatic test pattern generation, then there is a test with a given stimulus and fault set; secondly, if the package contains automatic test pattern generation, then a fault simulator will be an integral part, and the generated patterns will be put through the fault simulator. Logic synthesis is covered by tests on combinational, synchronous and asynchronous machines.

Initialisation: this is the problem of determining whether a circuit reaches a definite state starting from an unknown state. The standard technique was to set the circuit's memory to the unknown X state and then to simulate a homing sequence. These problems proved to be very difficult, and all but one package failed to initialise the circuits. The one successful package was able to initialise two of the circuits by the use of a first-order initialisation procedure, but failed on test 10 because circuit 2 is an order-four initialisation problem. It was generally commented that practical circuits very rarely have any initialisation problems, because the designer has become aware of the limitations of the CAD programs and thus always builds in master reset lines. In fact many organisations have placed constraints on the designer in order to overcome the problem. What was more revealing was the knowledge, or lack of it, that users had when tackling this type of problem.

Timing analysis: one of the most important problems is to determine whether a circuit will function correctly under any variations due to manufacturing tolerances and field environment, both of which affect the delays in components. This problem can be tackled by simulation in two ways: multivalued simulation, or two-valued simulation using a number of starting states. The former is pessimistic and the latter optimistic. The most common way of examining delay tolerances is multivalued simulation; however, this can give pessimistic results. The standard technique was the use of an ambiguous gate model, where the unknown value X is given at the output of a gate during the min-max times. This technique was the cause of many pessimistic results and the failure of many tests. These problems also proved to be very difficult for the few packages that could handle delay tolerances. Accurate solutions to the problem require path-tracing or algebraic techniques, but very few current CAD systems have solutions which work in complex cases. It was generally commented that this problem had to be overcome by further development work to produce good and accurate worst-case analysis, because there was no known work that offered a satisfactory solution.

Automatic test pattern generation: the tests for this area were graded, and many of the programs being used were only capable of handling the less sequentially complex tests. Test generation and fault simulation have all the problems of performance prediction, and much more severe problems of efficiency. However, apart from the efficiency problem, many such programs rely on a gate-level description; this means that there is no functional description, and so the program cannot tell what input conditions are likely to generate hazards. The results of the survey indicate that few establishments have this capability, although an increasing number of research organisations are attempting to address this area. In a negative way these tests helped to correct some of the misconceptions of the first survey, in that some of the research establishments who claimed to have programs in this field, when confronted by the tests, declared the programs to be unfinished or incapable.

Fault simulation: a separate set of tests was used from the above, because there is a high predominance of programs where the fault simulator is used to establish test sequences by either manual or random test generation procedures, or a combination of both. Thus information was collected on the fault models considered, how the test sequences were graded, the degree of automation achieved, and how much human interaction was required.

Synthesis tests: in general this is an area which has had a declining amount of attention in the last five years, because many people believe it has little to offer, particularly in respect of cost savings. Nevertheless the tests were offered because of the potential application of state assignment and minimisation to certain areas of IC design, such as cell/array structures. Of course, in all cases, if the user is sufficiently experienced there are heuristic techniques which either avoid or overcome the problems; very often in the more experienced organisations these have resulted in a direct discipline on the design team.

ANALYSIS AND SOME RESULTS

FIRST ITERATION DATA

The first iteration of the survey gathered a very large amount of data, which would have been extremely difficult to handle had it not been for the matrix formatting of much of the information, which lent itself to computer data processing. As much of the data was gathered under an agreement of confidentiality, it is not possible to reiterate it in detail in this paper; a subset of the questions is presented, namely 'CAD problem areas', 'Reasons for using CAD' and 'Needs for the period 1979-1982', with only global figures, so that no individual company may be identified. Despite this superficial consideration of the data, it nevertheless gives a useful guide to trends and contradictions, as well as serving as an example of the data sets accumulated in the survey. Of course an important part of the study was to collect evidence of trends, needs and potential for the future.

For reasons of presentation of the data, where the 'peaks and troughs' are the important aspects in value judgements, a system of weighting was used which effectively emphasises the deviations from the norm. This construction is explained in two examples in Figure 3, which show how the score can vary between 1 and 8.

Considering the question of 'CAD problem areas', with reference to Table I, the comments obtained on the criteria offered can be summarised as follows. Some interesting points which arise from this table, and other aspects of the report, are that CAD contributes more to the design of LSI hardware than to any other area of possible application. However, there is a 'spill-over effect' in two ways: firstly, CAD is applied to other lower technologies although initially used in the LSI department; and secondly, when the CAD tool was purchased for testing applications (because of the high costs in this area many packages were purchased for this aspect), there is a spread of the technique to other departments. In fact, no matter what dichotomy of the data is used, all groupings want better CAD packages to handle higher complexities of circuit design, and greater efficiency. The universal constraint upon progress is the cost of, and investment in, software programs. Hardware costs are reducing, partly because of the application of CAD to the hardware. This may point to an opportunity for any high-technology nations with access to nations with low labour costs.
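The deviation-emphasising weighting described above can be sketched in a few lines. The study's actual construction is given in its Figure 3 and is not reproduced here; the function below is purely our own illustration of the general idea, in which a response scores more the further it lies from the norm, so that 'peaks and troughs' stand out in the totals.

```python
# Purely illustrative: this is NOT the study's Figure 3 construction. It only
# shows the principle of a score that grows with deviation from the norm,
# so extreme answers dominate the graphical totals.

def emphasised_score(response, norm, gain=2.0, base=1.0):
    """Score grows linearly with the distance of a response from the norm."""
    return base + gain * abs(response - norm)

norm = 2.5                                   # assumed mid-point of a 1..4 tick scale
assert emphasised_score(2.5, norm) == 1.0    # on the norm: minimum emphasis
assert emphasised_score(1.0, norm) == 4.0    # strong deviation: amplified
assert emphasised_score(4.0, norm) == emphasised_score(1.0, norm)  # symmetric
```

The `gain` and `base` values here are arbitrary; any monotone function of the deviation would serve the same presentational purpose.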
To indicate the capability of the technique, Table II gives the scoring for suppliers only. There was also a general concern of how to cope with the retraining and recruiting of personnel, and in industry in particular the recruitment of skilled personnel; this links with the European concern regarding the social effects of runaway technology (reference the opening address of the symposium).

In order to show how CAD had already assisted technology, a number of questions were asked about the reasons for using CAD. For the total study the histograms are shown in Table III, whereas the totals for Europe are given in Table IV. (Note: the European answers represent 64% of the total survey.) Improvement of design quality is, however, the single most important reason for using CAD, with most countries rating this reason very highly. It was also cited as the reason why many cost evaluation systems are abandoned as the quality baseline changes. Closely linked with this improvement in quality is the factor of increased complexity, achieved at system level or dictated by component technology. Also following from this, the tables link the percentage of CAD used in the design/production of integrated circuits, printed circuit boards and complete sub-systems with the size of the company and the size of the computers used. The tabulations of this data are given in Table V for the world and Table VI for suppliers only. In order to give an indication of the depth of treatment, a further table (Table VIII) shows the areas of application of CAD by a group of principal users.

The views of all interviewees about the 'Needs for the period 1979-1982' were therefore sought. The questions were broadly divided into considerations of manpower, time saving and product quality improvements. The global results are given in Table VII. In general there were a very high number of supplementary answers, or additional information, offered with this table, indicating a high level of interviewee involvement. Comments made by the companies on 'CAD essential for' and 'CAD difficulties in' are added to give emphasis to important points made by key people. Some of the interesting comparisons/correlations are: software costs are seen as a serious problem; in contrast to software costs, lack of theoretical understanding was, strangely, not thought to be a problem, whereas the development of high-complexity packages was thought to be a problem. Almost without exception, all areas look for new packages to handle the ever-increasing complexities demanded; CAD packages for a top-down design approach are consistently looked for/hoped for! The interesting comparisons show that the USA and Japan were scoring highly, with a reasonably smooth histogram, whereas Europe scored lower and had a surprisingly large spread of answers.

A number of possible E.E.C. projects were offered to users of CAD, with suppliers being given a different format, although the sets worthy of most comment cannot be given, other than to say that some of the greatest enthusiasm for an E.E.C. project tended to come from non-member countries! It can be appreciated that there is a mass of data correlations that can be, and has been, done as part of the project analysis of the first iteration survey, but this was not done in isolation from the second iteration data. All of this data contributed to the analysis phase of the project.

Although the problems set in the benchmark tests are artificial, they showed how poor the packages were, in that no program was able to handle the three tests. This also gave an insight into how well the users could cope with such problems. Table X (Timing Analysis Tests) gives a catalogue of some of the experiences of applying the four timing tests to those packages which purport to be able to cope. Where there is an X-state on a node, a hazard is assumed. One technique is to use a different model, approximately the same functionally, that will initialise.
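The pessimism of these X-state techniques can be seen in a minimal three-valued (0, 1, X) evaluation sketch. The toy below is our own, not from the study: with an unknown input, a signal ANDed with its own complement evaluates to X rather than 0, even though the true value is 0 for either binary assignment, so a spurious hazard (or a failure to initialise) is reported.

```python
# Minimal three-valued logic sketch (our own toy, not the study's benchmarks).
# It shows why the standard X-state technique is pessimistic: for a = X the
# simulator reports a AND (NOT a) as X, although the true value is 0 for
# either binary value of a.

X = 'X'

def v_not(a):
    return X if a == X else 1 - a

def v_and(a, b):
    if a == 0 or b == 0:          # a controlling 0 forces the output low
        return 0
    if a == 1 and b == 1:
        return 1
    return X                      # any remaining unknown propagates

assert v_and(X, v_not(X)) == X                       # pessimistic: unknown reported
assert all(v_and(a, v_not(a)) == 0 for a in (0, 1))  # true value is always 0
```

This is exactly the correlation between a signal and its complement that a purely local gate evaluation cannot deduce, which is why path-tracing or algebraic techniques are needed for accurate worst-case analysis.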
The group of tests outlined in the earlier part of the paper were used extensively In the second part of the survey and because they were in effect assessing individual packages only tabulation of that data which gives general characteristics Is present here with the following comments. a. The standard technique for initialising is to set the circuit to the unknown X state and then simulatea homing sequence: the circuit is initialised if all states disappear. In fact it was alarming to find a high proportion of users who were totally oblivious to this problem. This technique. is pessimistic because the simulator cannot deduce the fact that with state variable signal say a. The standard technique was the use of an ambiguous gate model where the unknown value X Is given at the output of a gate during the min-max times. In practice. Table IX Initialisation Tests The problem of Initialisation does not cause very much inconvenience in practice even though the problem Is still far from being solved. This is the reason so many systems failed in these tests. Table XI Fault Simulation Test The fault simulator was considered by many organisations as the principle tool of the test engineer to aid test program development. for example It is common to use an edge triggered J K latch In place of a master slave J K latch when the set and reset lines are not in use. there Is evidence that there is acceptance albeit Indirectly in many cases. This technique is pessimistic in a similar way to the initialisation standard technique but is much more severe because the problem cannot be manually overcome. Due to the fact that the full Initialisation problem is very difficult and a circuit has to be initialised in some way prior to a simulation run there have been a few good heuristic techniques developed that work around the problem. These problems showed that nearly all of the systems did not have worst-case analysis techniques capable of dealing with all types of critical timing. 
In general it was commented in the interviews that better techniques have to be developed to cope with all cases of critical timing at all levels of modelling especially at the systems level where the problems In using the standard technique are most severe.a = 0 when a = X. Another technique that is not so common is to set the memory of a circuit to a random pattern of O's and I's at the start of each simulation run. thus by simulating a homing sequence many times the probability that the circuit will initialise can be determined (monte-carlo analysis).276 G. MUSGRAVE initially used in the LSI department and secondly when the CAD tool was purchased for testing applications (because of high costs in this area many packages were purchased for this aspect) there is a spread of the technique to other departments. If this Is not available or does not work then the circuit is forced manually to a known state by the user. the initialisation of a circuit is achieved using the standard technique. As has been outlined in the . So despite the many misgivings given In comments upon the unwillingness of good designers to use CAD tools. U. These comments also have a bearing on the component modelling library which Is essential for effective commercial utilisation. These can be usefully commented upon under the headings of modelling. d. in fact only a few of the investigated systems had this capability in the true sense. times and only achieve 80? cover which shows the pointer to the L. Notwithstanding these variations the tabulation shows the Importance of language. Those packages which have been the most successful have had a component library which has been maintained and . by an order of magnitude. c. Table XII Automatic Test Pattern Generation The ATG programs for sequential logic were not widespread. simulation.T. Functions of the primitives used and indeed what primitives had been preprogrammed. A.G.T. testing ρ rob I ems.S. 
Table XIII Efficiency

This is the sort of tabulation which has great import, but in no way must the figures be taken too categorically, for the following reasons:
(a) the test was a relatively small circuit and was only a single circuit example (topology can play an important factor);
(b) many of the printout data did not give compatible information on simulation c.p.u. times, some of these being calculated by hand;
(c) different versions of the programs were being used without proof of versions being given;
(d) the picture is not complete until the relative running costs of the various machines are taken into account.

This method is nevertheless acceptable, because the project was not to obtain a close ranking of packages but to ascertain general capabilities and trends in techniques and philosophy. Despite these pitfalls it is interesting to note the diversity, by an order of magnitude, of the software performance of currently offered programs. It is also obvious that these factors can change the clock-pulses-per-second figure.

Modelling

There are many conflicts that arise when a particular model is exercised for any application. The primitive of a unit-delay gate is ideal for test pattern generation, in terms at least of ease of generating the tests, but it suffers from the problems of overheads associated with large circuits and lacks the accuracy needed for design verification. In short, the dichotomy arises when the more accurate models, which reflect detailed timing information and extra functional behaviour and are ideal for design verification, are the basis for test pattern generation: they are virtually impossible models for ATG. Thus the solution must lie in the ability of the programs to have several modes for different applications, all of which are data compatible. This nevertheless must be viewed as a stop-gap solution, and in the long term a single model must be sought.
General Comments on Test Results

Clearly the tests and the results obtained were not wholly objective, but fair subjective evaluation is necessary in order to collate the often diverse information. This also indicates a need for a full appreciation of all the different types of simulator and their suitable applications: hence the need for suppliers to educate the users fully in the limitations of their packages, and in the manual techniques for overcoming the same. This ignorance could be forgiven in the case of initialisation, because it is not a serious practical problem at present.

Fault simulation

As has been outlined in the state-of-the-art lectures of this symposium, there are a number of types of fault simulator, and the evidence given in the table indicates that most are favoured by some organisation, giving a wide spectrum of results. There are many programs using a number of techniques in fault simulation, often in combination, but the essential classes are serial, parallel, deductive and concurrent. The technique that is most widespread in practice is parallel fault simulation. This is due to the fact that, apart from the serial method (processing one fault at a time), the parallel method is the easiest to implement and gives good results without using excessive computer time, especially if a large number of bits are used for processing (e.g. the CDC 60-bit word gives 59 faults per processing pass). There was little evidence that testability was being fully evaluated at the early design stages, but there was evidence that post-design test pattern generation was beginning to dictate requirements on designers. (The basic primitives used also dictate non-functional testing.) The tabulation shows how, with very simple circuits, it is possible to run up some long c.p.u. times and yet only achieve 80% fault cover, which points to the LSI testing problem.
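The bit-packing that makes the parallel method economical can be sketched as follows; this is an illustrative assumption rather than code from any surveyed package, and an 8-bit word stands in for the 60-bit CDC word:

```python
# Bit-parallel fault simulation: each bit position of a machine word
# carries one copy of the circuit, so an n-bit word simulates the
# fault-free machine plus n-1 faulted machines in a single pass.
WORD = 8                   # bit 0 = fault-free machine, bits 1..7 = faults
MASK = (1 << WORD) - 1

def apply_fault(value, bit, stuck_at):
    """Force one machine's copy of a signal to a stuck-at value."""
    if stuck_at:
        return value | (1 << bit)
    return value & ~(1 << bit) & MASK

def nand(a, b):
    """NAND evaluated for all machines at once."""
    return ~(a & b) & MASK

a = MASK                                    # input a = 1 in every machine
b = apply_fault(MASK, bit=1, stuck_at=0)    # b stuck-at-0 in machine 1
out = nand(a, b)

good = out & 1                              # fault-free response
detected = [bit for bit in range(1, WORD) if ((out >> bit) & 1) != good]
print(detected)                             # -> [1]: the fault is detected
```

One pass of ordinary word-wide logic thus evaluates every faulted machine whose response differs from the good machine, which is why a wide word length translates directly into faults per processing pass.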
Logic simulation

As the tests on initialisation required up to fourth-order indeterminacy, it was not surprising to find most packages failing on these examples. The standard technique of resetting all memory elements to an unknown state prior to simulation is, in practice, adequate for the majority of circuits being designed in industry; but clearly, as circuits become more complex, higher orders of indeterminacy will have to be coped with. Some simulators had this ability automatically built into the program. The more general observation to be made was that many of the users were unaware of the problem and its potential solutions (details are given in the previous section). Critical timing is far more important than initialisation, because it is far harder to avoid critical timing situations, especially with the non-functional test sequences which are in vogue at the present time.

System integration

This area is one of the most important aspects of CAD, in that the degree of integration can have a very profound influence upon the overall cost-effectiveness of CAD. It is apparent that few companies have been able to exploit, for example, a common data base, or even a common data-base management system, from conceptual design through to test programs and ATG. Certainly there is evidence that the lack of a component library, or the lack of information on the models used in the library, is undermining the use of CAD. As the technology dictates greater complexity, there is a need for closer liaison between component manufacturers and their customers; in fact it has been suggested that in future IC manufacturers should supply an accurate and complete computer model for their components.
Often this has been easily achieved if the package has been an internal development, where the library can be restricted to suit a narrow user community. Certainly this work has been the prerogative of the large organisations; it is they who were able to claim important economies by using CAD tools in digital systems. What is interesting is that the same malaise of user unawareness was apparent with worst-case timing analysis and fault simulation, and the application of fault simulators is ever increasing as the testing problem continues to increase product costs.

CONSIDERATIONS FOR THE FUTURE

The project team considered aspects of present research work and publications under the headings of logic specification and synthesis, simulation and verification, and test pattern generation. As many of the leading researchers in this field are contributing to the symposium, it was considered unnecessary to report the findings here, although the bibliography at the end of this paper may be useful to readers. For the same reasons, aspects of technological evolution, both from a components view and in its effect on computer architecture for CAD, are not detailed.

Most of the present algorithms are based on theory that was developed in the mid-60s, and at present few people are working on, say, heuristic techniques to achieve the sort of efficiencies that are needed for complex systems. Essentially the research is considering solutions which by their nature are a little tenuous and certainly long term, and the research teams all appear to be extremely short of resources in terms of computing power and manpower, and certainly acutely short of finance. On specification tools, it has been clearly identified by all involved in this subject area that there is a need to be able to specify a total system and evaluate that system with various implementation options.
CONCLUSIONS

There are many problems that have been revealed in this survey and analysed for this study. Very few of them can be tackled with autonomous solutions; they all require multiple solutions, but in essence they stem from two factors:

1. the ever-increasing complexity afforded by future technologies;
2. the lack of understanding and appreciation of present CAD techniques by many practising engineers.

In this section three headings have been used to group the problems, namely research, educational and industrial. In respect of the educational section, some ideas will yield short-term benefits; but if the basic problems are to be overcome, this must be at the fundamental training level of electronic system designers, and hence will yield benefits in the longer term. In the industrial section, specific projects are identified which could be undertaken in a much shorter time with a more definite return.

Research

In general there are small groups and teams of researchers working on a number of problems in the CAD field throughout Europe. In contrast with the USA, the groups do not appear to be in close communication, and certainly are not benefitting from each other's activities as well as they might. In general, the research tasks which could yield the important solutions to problems in CAD may be listed as follows:

1. Specification tools: to be able to specify a total system and evaluate that system with various implementation options. Note that this transcends any software or hardware partitioning.

2. Algorithm development: it is recognised that there is a need for improvement in the algorithms for simulation and test pattern generation programmes.

3. Analogue interfaces: all digital systems have analogue interfaces at their inputs and outputs, yet there is virtually no research progressing which is attempting to embrace the analogue circuitry with the digital circuitry from a total systems-evaluation and reliability point of view. Some people solve their problems, both in the combined analogue/digital and in the solely digital world, by partitioning; this pragmatic approach is at present the only means of coping with the complexity of total systems.
4. Software management: the potential of computer-aided design can never be fully utilised until the problems of software portability, machine independence and general data-base management have been fully developed. Predominantly the researchers in this field are geared to large software systems, but there has been no fundamental research determining the dichotomies of the system. For example, many of the following functions require separate data: conceptual design through to prototype evaluation, engineering drawings, components lists, test schedules, and maintenance schedules and diagnostics; not to mention the differences of their inherent users.

In general, the following are a set of projects, considerably shorter term than those under research, which could be adopted with some predictable success in Europe.

Components data base: One of the important problem areas, and one that is expending many man-years, is to establish the correct models for existing simulation programmes. One of the problems is that the component suppliers are not providing the necessary detail to make the models sufficiently accurate for the more sophisticated CAD techniques. A component data base would be established with the appropriate back-up service to maintain and update it and to give user advice; it could also utilise a European data communication system such as Euronet. Although there was a desire to have established standards for component libraries, many of the practising engineers believe that such standards, although highly desirable, are unlikely to gain acceptance until this general field has been established for a number of years. They also believe that the imposition of standards from above (for CAD software, peripheral terminals, etc.) is unlikely to gain great acceptance because of the already existing and confusing American, Japanese and multiple-nation European standards. A much better approach would be to establish working tools which, because of their existence, would effectively become adopted as standards. This could be extremely successful in providing software and hardware for solving specific problems and at the same time give Europe experience independent from the USA.
Industrial

Many of the establishments surveyed considered there was considerable merit in establishing collaboration throughout Europe on projects which could give immediate benefit. A European collaboration to use the total procurement power of its members, in order to pressurise component suppliers to provide the necessary detail, could therefore be invaluable. Often, too, there is little integration amongst the various functional packages of CAD software.

On the educational side, although it is difficult to establish a common programme throughout Europe, there is nevertheless sufficient commonality in this problem area to suggest that solutions at both a European and a member-nation level are viable; this calls for a whole set of strategies pertaining to education. One step is to identify, throughout the member nations, establishments where there is sufficient up-to-date expertise and facilities to enable institutes for the retraining of engineers to be established. These could provide the focal point for various courses at both engineering and managerial level, and would need to be supported with demonstrations and, above all, a programme of workshops which could be based as a European project. In the medium term, an appropriate syllabus and lecture notes should be developed; it is suggested that, by secondment of experience in the CAD field, the Commission could provide a detailed syllabus. This should be undertaken as soon as possible in order to prepare the ground for the longer-term member-nation re-training programme.

Projects: To set up a number of very specific product-orientated projects with companies from several member nations. A typical set of products which could be developed are:
an ATE for microprocessors; a concurrent logic simulator; a general-purpose compiler from automatic test pattern generation to, say, an automatic test language for avionic systems (ATLAS); an associative processing machine for symbolic manipulation; etc.

Educational

Throughout the survey stage it has been thoroughly established that there is a shortage of good digital circuit design experts in Europe. The strategy for this problem is outlined below.

To establish within industry a number of viable projects, similar to the above, which will help retain top engineers in this field and give them a meaningful role within European CAD, avoiding the enticement to the US for many of these people. Similarly, scholarships should be provided for practising engineers in the digital electronics field to gain the knowledge available at research institutes, particularly in the US. A further task is to establish the differing employment profile that the future extensive use of CAD will have, and to predict from this the variations required in training schemes: the numbers to be trained, the numbers employed, and the skill categories of those employed. Such research into the social aspects of CAD was a frequent request during the European first-iteration interviews. The syllabus, lectures and lecture notes, as well as the video-tape medium, would provide the much-needed detailed tuition in existing CAD techniques and, more importantly, the basis of the education of the educators for their ongoing roles in the respective member nations.

Long term: To influence member governments, and for them in turn to influence certain university/polytechnic courses, to reflect the impact of the 'digital revolution'.

Many of the solutions may be inappropriate for EEC action.

ACKNOWLEDGEMENTS

Clearly this paper reflects the work of a team of people.
The problem identification and solutions cannot be listed in order of priority, nor can they be taken as individual items, as they all reflect the tremendous need for a totally co-ordinated and integrated approach to this very important developing subject. It should be said that it almost does not matter which solutions are developed, as long as some solutions are developed today: to make a start in plugging the gaps that exist in CAD knowledge, facilities and systems, and to prevent the development of similar facilities in parallel around Europe, which prevents optimal use being made of the scarce CAD design resources. Through its technical representatives and various committee structures, the Commission could provide the necessary catalyst for action within the member nations. It can be justly claimed that the study itself has had a useful catalytic action, and the very fact that this paper is being presented at a European symposium with many guests from outside the Community is a useful step.

Short term: Most of the important symposia, colloquia and conferences in this field are held in the US. They also tend to be orientated to the general capability and economic structure of the USA, and there is a need to establish within Europe suitable, regular conference/colloquium facilities which many more practising European engineers could attend. This will provide the important grape-vine communication system that is essential for the solution of day-to-day problems.

All the team's contributions are recognised, in particular that of my research fellow Phil Moorby, who has devoted himself to this project over the past year; the study team was drawn from the SAGET consortium.

FIG. 1: A TYPICAL SET OF QUESTIONS ESTABLISHING A COMPANY'S PRODUCT PROFILE

Q5: Items designed each year, for each of SSI/MSI, custom LSI, microprocessor, memories, computer subsystems and others (please specify), in the ranges 1, 2-10, 11-100, >100.

Q6: In order to appreciate the complexity of your design operation,
please indicate the approximate range of complexity, in terms of equivalent logic gates, for the areas of activity (IC, PCB, subsystems), in the ranges 1-10, 11-100, 101-1,000, 1,001-10,000, 10,001-100,000, 100,001-1,000,000 and >1,000,000.

Definitions: IC = integrated circuits, with active components of any complexity, from a few gates to a microprocessor on a single chip. PCB = printed circuit board. Subsystem = separately testable part of a total system, e.g. a display subsystem or logic component.

FIG. 2: REASONS FOR USING CAD (matrix structure). Each reason is rated A to E: A = essential, B = strong reason, C = neutral, D = weak reason, E = no reason at all, plus "don't know". The reasons listed are: savings in manpower (design, production, testing, maintenance); time saving (design, production, testing, maintenance); improvement of design quality; documentation; necessity (increased complexity, increase of work load, shortage of skilled manpower); provision of a common data base for all to use; ability to evaluate different designs; ability to change specification; research and development tool; others (please specify).

CHART CONSTRUCTION: examples of the weighting used in the following tables. (A) When there are five possible answers (Table I), the answers are keyed A = essential, B = strong reason, C = neutral, D = weak reason, E = no reason at all, with "don't know" (DK) replies discounted. (B) When there are only four possible answers (Tables IV and VI), DK is likewise discounted.
C = 0.6 j DK 6.38 resultant weighting A C D Problem Areas Serious problem Minor problem No problem EEC Opportunities Very Interested Little ¡nterested Uni nterested Needs for Period 197982 A urgently needed Β may be needed C Unimportant DK Don't know NOTE: The weighted value or score can only vary between 0 and 8.7 Example: (45. E = 0. Don't know discounted 45.2 + 2 χ 7.6 29 40 1.8 A = 8. Example: (8 χ 29 + 6 χ 40 + 4 χ 18.7 χ 8 + 33.9 C 13.8) = 5. C = 4.8 I Weight: A = 8.3 + 0 χ Ι.2 Example: Savings In man power D 7. the weights are: A Β 33. Β = 4. Don't Know discounted.8)/(Ι00 3. 286 COUNTRIES: G.\7 '//. A 7 VA7/ 77 't // //. /7 / /// 7 'ι 71 777. '// 7 7/ 77ν»y. '// i à/λ 77 1 èη. f /f ft/ ι / / ιI hV/. 7/ //. TABLE I . MUSGRAVE EUROPE ♦ USA ♦ JAPAN CAD PROBLEM AREAS most serious CRITERIA Capital costs of CAD software Capital costs of CAD hardware Running costs (hardware cost and manpower cost) Retraining of personnel to use CAD Recruiting of personnel skilled In CAD No single comprehensive data base covering all CA D operations No single comprehensive input coding description language covering all CA D operations Inadequate CAD packages to cope with the complexity of ci reu its Gaining confidence in technical results produced by CAD Inadequate algorithms for CAD Man machine Interface peripherals Inadequate CAD packages to cover hybrid electronics 3 4 5 6 7 77. V. 7 f / / // // / 7 / . 7/ 7/ 7// 7 / / 7/ 7 7/ 777 7 7/ 7/ 7/ 77 7/ 7/ '7/ '7 7/ '7/ 7 77 ι / / / / / / / 1 / / w / '// / / r / 7/ 7/ /. 
TABLE II: CAD PROBLEM AREAS, suppliers only (countries: Europe + USA + Japan), scored from "most serious" downwards. Criteria: capital costs of CAD software; capital costs of CAD hardware; running costs (hardware cost and manpower cost); obtaining skilled personnel for research and development work; lack of theoretical understanding; developing packages to cope with the complexity of circuits; inadequate algorithms for CAD; man-machine interface peripherals; developing packages to cover hybrid electronics; too small a market for packages. (Bar chart not reproduced.)

TABLE III: REASONS FOR USING CAD (countries: Europe + USA + Japan), scored from "essential" downwards. Criteria: savings in manpower (design, production, testing, maintenance); time saving (design, production, testing, maintenance); improvement of design quality; documentation; increased complexity; increase of work load; shortage of skilled manpower; provision of a common data base for all to use; ability to evaluate different designs; ability to change specification; research and development tool. (Bar chart not reproduced.)
TABLE IV: REASONS FOR USING CAD (country: total Europe), scored from "essential" downwards; the criteria are the same as in Table III. (Bar chart not reproduced.)

TABLE V: NEEDS FOR THE PERIOD 1979-1982 (countries: Europe + USA + Japan), scored from "most needed" downwards. Criteria: a single comprehensive data base covering all CAD operations; a single comprehensive input description language covering all CAD operations; CAD packages able to handle higher complexities of circuit design; universities/polytechnics to teach more CAD in their courses; more CAD courses to be provided for engineers in industry; better man-machine interface peripherals; adequate CAD packages to cover hybrid electronics; CAD packages for a top-down design approach (conceptual specification to detailed implementation); CAD packages to cope with conceptual specification independent of hardware or software implementation. (Bar chart not reproduced.)
TABLE VI: NEEDS FOR THE PERIOD 1979-1982, suppliers only, scored from "most needed" downwards; the criteria are the same as in Table V. (Bar chart not reproduced.)

TABLE VII: INTEREST IN EEC PROJECTS, scored from "most interested" downwards. Criteria: how interested would you be if an EEC project were set up (a) to organise a central comprehensive data base for digital electronics; (b) to provide a CAD computer service; (c) to provide a CAD computer package; (d) to provide CAD standards? (Bar chart not reproduced.)

Profiles of the surveyed establishments (table). Columns: general activities of company; company size; type of packages (in-house and/or commercial); computer used; core available; core required; percentage of CAD use on IC, PCB and subsystems; and comments (what CAD is essential for, and difficulties).
Recoverable rows of the company-profiles table: a computer manufacturer (medium; commercial packages; ICL 1900; 37 KW core available, 18 KW required; >90% CAD) citing better quality, specification-to-design translation, PCB design and checking, custom LSI design and test equipment; production establishments (small; in-house packages; INTERDATA 8/32, IRIS 80, IBM 370, 200 K min) using CAD only for increased complexity (LSI), citing LSI design quality, modelling, simulation and testing, with retraining of personnel and technical software costs as difficulties; engineering planning and design automation (large; in-house and commercial; CDC CYBER 174, DEC 15, MODCOMP, NOVA; 32 KW; >90% CAD) citing manpower savings, a reduction in lead time of 50%, product quality and higher complexities, with capital costs and obtaining skilled men as difficulties.
Further rows: production design and testing (medium; commercial; IBM 370/168, IBM 360/91, UNIVAC 1108, SMC 3100; >90% CAD); military electronics development (medium; in-house and commercial; UNIVAC; 500 K) citing testing, general quality and cost effectiveness, and the need for a data base; large-system engineering (large; in-house; MITRA 15; 32 KW available, 16 KW required) citing the testing of complex ICs, with software costs, manpower costs and training, and inadequate algorithms as difficulties; production (large; in-house; SEL 325S; >90% CAD) citing performance before construction (customer requirement) and manpower savings, with communications between disciplines and departments a difficulty; a computer manufacturer (large; in-house and commercial; DEC 10 with GT62; 75 KW and 512 KW; >90% CAD), very much an IC-design and R&D tool, citing reduction in lead time; and a computer and telecommunications company (large; commercial; IBM 370/158; >50% CAD) reporting general savings but CAD not yet essential, with obtaining skilled personnel a difficulty. KEY: company size L = large, M = medium, S = small.

TABLE IX: INITIALISATION TESTS. The two packages with a 1st-order initialisation capability initialised the circuit correctly (one required an option to be given to run the special routine for 1st-order initialisation); the packages using standard 3-valued logic did not initialise the circuit, and in two cases the user did not know about the special routine for 1st-order initialisation. N.B. All other establishments were unable to do the test.

TABLE X: TIMING ANALYSIS TESTS. Methods used included min-max 3-valued simulation, 33% tolerance path tracing, an ambiguous gate model, min-max multi-valued simulation, and probability functions of the components. Results and comments: the standard technique fails on this circuit; the ambiguous-gate simulation generated a hazard around the feedback loop and crashed after 128 pulses when feedback was active; the path-tracing methods gave accurate results (hazards only on the positive edge of SELECT, or only on OSC), though their J-K flip-flops do not recognise dynamic hazards on the clock input, and one showed no pessimism on the negative edge of SELECT only because tolerance is given to the S1 and S2 gates; a probability-based algorithm achieved a successful run, but was very slow; the pessimistic methods reported a hazard on the negative edge of SELECT, or hazards generated on both edges of SELECT at the output, and one algorithm could not detect that no hazard is present on the output.
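The ambiguous gate model exercised in the timing analysis tests above can be sketched as follows (an illustrative assumption, not code from any surveyed package):

```python
# Ambiguous gate model: after an input change at time t, the gate output
# is unknown (X) during the window [t + min_delay, t + max_delay] and
# only settles to its new value at t + max_delay. Worst-case timing
# analysis flags a hazard wherever an X window overlaps a sampling edge.
def ambiguous_events(change_time, min_delay, max_delay, new_value):
    """Return (time, value) output events for one input change."""
    return [(change_time + min_delay, "X"),        # output may have changed
            (change_time + max_delay, new_value)]  # output has settled

events = ambiguous_events(change_time=10, min_delay=2, max_delay=5, new_value=1)
print(events)   # -> [(12, 'X'), (15, 1)]
```

Because every gate contributes its own X window, reconvergent paths accumulate ambiguity, which is exactly the pessimism the comments on these tests describe.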
TABLE XI: FAULT SIMULATION TEST. For each package: the method used (serial, parallel or deductive), the primitives used (functional flip-flops, gates or NAND gates) and the throughput in faults per CPU-second (the recoverable figures range from about 0.03 to a few faults per CPU-second). The comments note the store used (from a few KB up to 26 KB), the number of faults simulated per pass (e.g. 31 and 47), and that one establishment supplied no times.

TABLE XII: AUTOMATIC TEST GENERATION TESTS. For each run: total CPU time (from about 1 s up to 1320 s), number of test patterns generated (from 4 up to nearly 800) and fault cover achieved (81% to 100%), over test circuits A to D. N.B. The tests got progressively more difficult from A to D.

TABLE XIII: EFFICIENCY (with continuation). For each package: machine capability and size (slow/medium/fast; small/medium/large), implementation language (assembler, Fortran or SIMULA 67), primitives used (gates, NAND gates, functional flip-flops, probability values), delay model (zero, unit, typical, synchronous or asynchronous) and simulation speed in clock pulses per CPU-second (the recoverable figures range from under 1 to about 400).
Bibliography

Specifications

Holt, A.W., et al., Information System Theory Project, Applied Data Research, Report No. RADC-TR-68-305.
Bell, C.G. & Newell, A., The PMS and ISP Descriptive System for Computer Structures, Proc. AFIPS SJCC 36, pp. 351-384, 1970.
Iverson, K., A Common Language for Hardware and Software Applications, Proc. AFIPS SJCC 21, pp. 345-351, 1962.
Duley, J.R. & Dietmeyer, D.L., A Digital System Design Language (DDL), IEEE Trans. on Computers, C-17, 1968.
Schorr, H., Computer-Aided Digital Design and Analysis Using a Register Transfer Language, IEEE Trans. on Electronic Computers, EC-13, pp. 730-737, 1964.
Friedman, T.D. & Yang, S.C., Methods Used in an Automatic Logic Design Generator (ALERT), IEEE Trans. on Computers, C-18, 1969.
Clare, C., Designing Logic Machines Using State Machines, McGraw-Hill, New York, 1973.
Lewin, D., Specification and Design Languages for Logic Systems, AGARD Conference Proceedings.
The LOGOS System, IEE Conf. Computer Aided Design, Publication No. 86, 1972.

Simulation

Breuer, M.A., A Note on Three-Valued Logic Simulation, IEEE Trans. on Computers, C-21, April 1972.
Breuer, M.A. & Friedman, A.D., Diagnosis and Reliable Design of Digital Systems, Pitman, 1977.
Szygenda, S. & Thompson, E., Digital Logic Simulation, Computer, March 1975.
A Computer Program for Logic Simulation, Fault Simulation and the Generation of Tests for Digital Circuits, AICA Congress on Simulation of Systems.
Schuler, D.M., Simulation of Large Asynchronous Logic Circuits Using an Ambiguous Gate Model.
Armstrong, D.B., A Deductive Method for Simulating Faults in Logic Circuits, IEEE Trans. on Computers, C-21, pp. 464-471, May 1972.
Parker, K.P., Adaptive Random Test Generator, IBM Watson Research Center, Yorktown Heights, N.Y., IBM Report RC 5769 (25006), Dec. 1975.
Further references cited: Roth, J.P.; Ulrich, E. & Baker, T.; Chappell, S. & Yau, S.; Heath; Design Automation and Fault Tolerant Computing, 1; IEEE Computer Society Repository R76-68; IEE Conf. Publication No. 130 (Computer Aided Design and Electronic Circuits); Fall Joint Computer Conference.
4 Williams T. 1978.C.3. Injection Logic Boosts Bipolar performance while dropping cost. Proc. Here come the big. Feb. Electronics.. 50. ibid 237. Scientific American 237. IEEE Issue on Microprocessor Applications. Re ITT CEC T/3/77. 30 March 1978 Holdt.110.E.22.3.3.7. 9th June 1977 Electronics.M.. Electronics.22. Electronics. 51. Special Issue 'A nnual Technology Update' Saget.. Altman. 50.94. D.3. Five Technologies Squeezing more Performance from LSI Chips. 1973. I. 50.163. J.7.96. density records. Mayo. ibid 237. 18th August 1977.17. 27th October 1977 Altman L. Jenne. Sutherland. ΗMOS Scales Traditional Devices to Higher Performance Levels. ibid 237. Microelectronic Memories.N. 15th Annual Design A utomation Conference June 1978. Electronics. Microelectronics Computer Science.192.M . Electronics 50. Terman.W.17.107. Dense Interchangeable ROMs Work with Fast Microprocessors.82. Oliver. L. 4 Cohen.3940 Mystery Computer Unveiled Electronics International. Microelectronics. 4 Carver.3. Japan Presses Innovations to Reach VLSI Goal. Electronics. 50.Α. 18th August 1977. The Fabrication of Microelectronic Circuits. ¡bid 237.G. Microelectronic Circuit Elements.3. R. VMOS Configuration Packs 64 kilobits Into I75mi 12 chip. p. 4 Yu.W.D. 30th March 1978 Altman L. ¡bid 237.63 Meindl. A.3.S.B. Altman L. B.130. March 30.211.R Cell Layout Boosts Speed of Low Power 64K ROM. 18th August 1977. The Role of Microelectronics in Instrumentation and Control ¡bid 237. Toong.D.210 Kay. Luxemburg. Pashley R.B.94. Electronics. J. The Large Scale Integration of Microelectronic Circuits. et al. March 29. Noyce. new 64K ROMs. F. 30th March 1978 Wilson.5152.A. June 1977 Yamada A.146. C L .3. H. ¡bid 237. Eichelberger E.17.12. 14th Annual Design Automation Conference. p. Oldham.107. 50. 1978. The Role of Microelectronics in Communications.3. p.EUROPEAN COMMUNITIES STUDY: TECHNICAL PERSPECTIVE 301 Technology Trends 'CAD Electronics Study' for the Commission of the European Communities. 
27th October 1977. et al Automatic Systemlevel Test Generation and Fault Location for large digital systems.70 Koiton. W. Electronics. R. Electronics. p. R. T. A logic design structure for LSI Testability Proc. ¡bid 237.91. ¡bid 237.347 Gate Arrays taking over in Logic using ECL Electronics International. 51. The Gathering Wave of Japanese Technology. W.99 9th June 1977.3. L.12.180. et al. ¡bid 237. 18th August 1977 Sander W.3. Microelectronics and the Personal Computer. 50.C.111 Hodges. Proc. Electronics 51. 50. The Role of Microelectronics In Data Processing. . Greene. New MOS Processes set speed.104.92.7. P r o c . . The microprocessor and ¡ t s a p p l i c a t i o n . 1976. I n t l . Hyla G. e t a l . Nakano T. Brooks. Conference D i g e s t . p . February 1978. 7 . D . 12 No. Cambridge U n i v e r s i t y P r e s s . ESL and p r o j e c t MAC. D esign of h i g h l e v e l language o r i e n t e d p r o c e s s o r s .E. N a t i o n a l Computer C e n t r e .J. An I n t e g r a t e d Hardware Software System f o r Computer Graphic In Time S h a r i n g . R . 1 1 . 1 2 . F. p. UK F l e g e l H. D i g e s t of IEE Colloquium on High Speed C i r c u i t s and Techniques. J . Frankfurt. Proc. VD MAProc. D i g e s t of IEE Colloquium on High Speed C i r c u i t s and Techniques. North H o l l a n d P u b l i s h i n g . February 1974. MASS. Proc. W. ISSCC Conference D i g e s t . MIT Cambridge. . 6 2 6 3 .6465. L . 2 . s t a t i o n . Ρ Computer a r c h i t e c t u r e and n e t w o r k s . E. e t a l . Gelenbe. High Speed Programmable Logic A r r a y s . High Speed A p p l i c a t i o n s o f t h e CD I P r o c e s s . CAD m i t H i l f e von Computers Graphics Systemen. 200 Gate ECL Master s l i c e L S I . . Cherance. C t r e n d s . P. M. ISSCC Conference D i g e s t . Amsterdam.1976. MUSGRAVE Masakl Α . Towards more e f f i c i e n c y computer o r g a n i s a t i o n s . Nov. Cambridge.P. 4 T a y l o r F. Euromicro Symposium. D ecember 1969. 2 .E. 
COMPUTER-AIDED DESIGN of digital electronic circuits and systems, G. Musgrave, editor
North-Holland Publishing Company
© ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS: SURVEY IN U.S.A. AND CANADA

A.H. Carter
Technical Services Manager, Engineering Department
Plessey Radar Limited, Addlestone

The objective of this paper is to highlight the CAD activities reviewed in the survey and, as a consequence, to help stimulate cost-effective applications of integrated CAD systems in industry.

INTRODUCTION

Over twenty establishments were visited in the survey, both suppliers and users of CAD systems, together with universities covering research and development. They were chosen to provide a representative mix of industrial activities.
The main topics covered in the paper are:

1  Survey Interviews
2  Awareness of CAD/CAM System Performance
3  An Integrated CAD/CAM System Approach
4  New Design Criteria and Equipment Practices
5  Benefits of a CAD/CAM System
6  Integrated Data Base for Engineering, Manufacture and Test

1  SURVEY INTERVIEWS

1.1  Common Themes from Survey Interviews

From the grass-roots information gathered in the survey discussions a number of common themes emerged:

1  Start-up costs are high
2  Need to know more about existing packages
3  Package evaluation information needed
4  Need to understand the reasons for existing methods before applying CAD
5  Need to change existing methods in order to optimise the CAD system
6  Implementation of CAD had a major impact on design, manufacture and test operations
7  Major benefits quoted were common

One result of establishing many of these common themes was to see the need for, and to initiate, this symposium.

1.2  Interview Methods

Although a considerable amount of time and effort had been given to establishing a comprehensive questionnaire, it was essential at all interviews to hold detailed general discussions, lasting several hours, with those visited before attempting to formally complete the questionnaire. The questionnaire was very valuable, and at the end of the general review it stimulated further discussions. It was obviously important to hold discussions with a range of people inside a company, up to and including their top executives: CAD/CAM managers and staff, and particularly the Design Engineers who use the systems.

2  AWARENESS OF CAD/CAM ACTIVITIES AND SYSTEM PERFORMANCE

2.1  Awareness

It was soon apparent that, even though the USA holds national and international symposiums on CAD/CAM, there is still a great need to improve the awareness of individuals working in CAD/CAM of the current state-of-the-art. CAD/CAM is Automation and Control in Engineering, Production and Test, but there is little awareness of this potential in most companies. (NOTE: Automation is usually related to production and test.) The impact of CAD/CAM, when effectively applied, is so great that a programme of education is essential if all functions in the business are to contribute to achieving the CAD/CAM implementations and the resulting benefits. Many key specialists commented: 'If only there were available an analysis and appreciation of existing CAD/CAM packages, and of the benefits and problems in the hands of the users, it would be extremely valuable.'

2.2  Education and Training Inside Companies

Those companies that are most advanced in applying CAD/CAM, and are reaping the benefits, have used considerable time and money to ensure the training and education of all their staff.

2.3  Existing Packages

The existing packages can be divided into two broad categories: (a) in-house only, and (b) commercially available.

2.3(a)  In-house Systems

Some very large companies know how extensive the impact of CAD/CAM is on their competitiveness; hence they invest large sums of money, but will not make their systems available outside their company. They integrate their systems with their own equipment and design practices, and they establish their own in-house design criteria.

2.3(b)  Commercially Available Systems

Most companies use commercially available systems. However, the source codes of these systems are generally not available to users, and therefore they cannot be enhanced or modified by the users to improve effectiveness. Another major drawback with proprietary software is the difficulty of integrating packages from different sources and achieving computer machine transportability. These systems will nevertheless be in considerable demand as industry becomes more aware of their advantages.

3  AN INTEGRATED CAD/CAM SYSTEM APPROACH

3.1  Simulate, Analyse and Test PCB and Custom LSI

The logic simulation, design verification and auto-test pattern generation modules form the central part of an integrated CAD/CAM system. This is vital in achieving a design to meet requirement specifications. Clearly, when designing for custom LSI it is essential to simulate, analyse and verify the designs before submitting them to a costly and time-consuming manufacturing process: the designs must be right first time. The same principle is true for PCBs, especially now that most PCBs have typically 50% of their ICs as MSI devices and hence there are many very complex PCBs; again it is essential to get the designs right first time. United States companies are using these simulation and analysis modules in this dual role and for the same basic reasons.

In one company, a critical reason for using the simulator and auto-test pattern generation packages was the shortage of staff to produce complex test patterns to apply to their custom LSI designs before submitting these designs to manufacture. Having used these modules, the Design Engineers found that the in-depth analysis they obtained resulted in them changing their designs to eliminate design errors and improve the testability of their designs. They also found that the depth of testing that could be achieved with these modules gave an insight into their own designs which was far superior to their normal manual method of producing test patterns. The point that is important to note here is the need for simulation packages to be applied to both PCB and custom LSI designs.

Figure 1 illustrates an integrated CAD/CAM system. This type of integrated CAD/CAM approach is now being made in the USA and, as a result, major benefits in time, cost and quality are being achieved.

3.2  Integration of PCB Packages

In Figure 1 the flow diagram is shown in the form of a cross. In the diagram the vertical functions are applied to PCBs; the horizontal functions cover system simulation, synthesis and custom LSI applications. The vertically connected packages typify how packages could be linked/integrated to cover the complete range of PCB CAD/CAM operations, from design through to manufacture and test.

3.3  Integration of Custom LSI Packages

Producing LSI devices is obviously an expensive design, manufacture and test process, because there is often no large-quantity production that can justify expensive time and money investment in producing the product. There is no way of building a representative prototype, and therefore CAD/CAM is essential to establish Right First Time. The right-hand part of the horizontal flow diagram in Figure 1 typifies the use of CAD/CAM packages to produce the custom LSI devices. The logic simulation, and hence design analysis and design verification, can be done by the logic designer before submitting his design to a specialist organisation for manufacture of custom LSI. The specialist organisation can benefit by partitioning his CAD/CAM requirements into three modules:

1  Circuit simulation
2  Placement and layout
3  Process simulation

because the techniques can change in each of these areas. It is noticeable that major packages have been developed basically keeping these areas independent, although related to each other, and hence in need of modular inter-linking/integration. There is a distinct advantage to be gained by using the modular approach illustrated in Figure 1.

A major impact of this approach is that some projects no longer build prototypes to verify their designs before producing their first production PCBs. The ability to achieve Right First Time is vital if significant time and money is to be saved by using CAD/CAM. The packages that are used to achieve this Right First Time capability are themselves continuously verifying their activities and the design data they are processing. Stage-by-stage verification, particularly with custom LSI, is the key to eliminating all errors.

4  NEW DESIGN CRITERIA AND EQUIPMENT PRACTICES

As mentioned earlier, CAD/CAM is a form of Automation that spans Engineering Design, Manufacture and Test, and the quality of all three functions. As in most automation, many controls (or the lack of them), procedures, and correction at many different stages in the process must be changed in order to optimise its use. Production and test techniques and procedures are modified to optimise the use of CAD/CAM Automation Systems. For example, the density of ICs per board area and the number of pins allocated for achieving high percentage testability will have to be established as part of the design criteria; percentage testability will be established at the design stage and formally quantified and documented. Design verification will include worst-case tolerance, and hence improve design quality and reduce the cost of ownership in the field.
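The worst-case tolerance analysis mentioned above can be sketched as corner enumeration: evaluate a circuit function at every extreme combination of its component tolerances and report the output spread. The resistive divider, values and 5% tolerance below are invented for the illustration; the survey does not describe any particular algorithm.

```python
from itertools import product

# Illustrative worst-case tolerance check (assumed example, not from the
# survey): evaluate a resistive divider at every tolerance corner.

def divider(r1, r2, vin=5.0):
    # Output of a two-resistor voltage divider.
    return vin * r2 / (r1 + r2)

def worst_case(nominals, tol):
    """Return (min, max) output over all +/- tol corners of the resistors."""
    corners = []
    for signs in product((-1, 1), repeat=len(nominals)):
        values = [r * (1 + s * tol) for r, s in zip(nominals, signs)]
        corners.append(divider(*values))
    return min(corners), max(corners)

lo, hi = worst_case([1000.0, 1000.0], tol=0.05)  # two 5% resistors
print(round(lo, 3), round(hi, 3))
```

For monotonic functions such as this divider, the true extremes do occur at tolerance corners, which is why corner analysis was a common design-verification technique.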
5  BENEFITS OF A CAD/CAM SYSTEM

5.1  Benefits: Time, Cost and Quality

Companies that are most advanced in CAD/CAM applications claim the foremost benefit is a drastic reduction in the design, manufacture and test cycle time. Companies quote a 30% to 50% reduction in the cycle time from receipt of contract to customer acceptance, and these figures are based on projects completed using an integrated CAD/CAM system, measured against estimated times based on past experience of similar project completion times before the application of the CAD/CAM system. Cycle time reductions of this magnitude obviously bring major benefits.

Some of the general benefits of CAD/CAM systems are as follows:

1   Increased capture of market
2   Improved control of design criteria and equipment practices
3   Establishment of an Engineering Design Data Base
4   Improved interface between Engineering, Production and Test
5   Improved quality and testability
6   Increased productivity/man
7   Increased period to sell a given product
8   Increased profits
9   Cash flow improvement
10  Improved accuracy of predicted cycle times and cost estimates
11  Worst-case design analysis
12  Reduction in fault diagnostic costs
13  Reduction in cost of ownership to customer
14  Reduction in operational down-time

6  INTEGRATED DATA BASE FOR ENGINEERING, MANUFACTURE AND TEST

A CAD/CAM system virtually captures the bulk of the prime engineering design information; in fact, a Prime Data Base is established. Studies since the USA visit have shown that in the PCB area the integration of the various packages will enable the basic design data to be input only once, and this information can then be used across the packages. This reduces the cost and time of inputting into a number of autonomous (stand-alone) packages. It also means that all changes to the prime data are made only once, so avoiding the danger of having prime data that is not up-to-date in a range of isolated autonomous systems.

In the EDP world, EDP Data Base Systems have been established primarily on large mainframe computers. However, changing technology has produced mini and micro computers that provide considerable computer power relatively cheaply, and this in turn enables EDP systems to consider distributed minicomputer systems coupled to mainframes. This means that a new generation of interactive EDP systems using minicomputers is now being established. The CAD/CAM integrated system approach is also based on using minicomputers in a batch and interactive mode. CAD design analysis requirements capture the basic Engineering Data Base up-stream of the EDP systems. There is therefore a major incentive to link/converge the basic CAD/CAM and EDP systems together.

7  CONCLUSIONS

The survey proved very successful because it was interactive. The USA/Canada visit provided very valuable information. The application of CAD/CAM to industry can significantly cut the cycle times from receipt of contract to delivery. In order to create cost-effective applications of CAD/CAM systems in industry, a major impact will occur in design, manufacture and test; clearly, new design criteria and equipment practices will be needed. There will be a continuing need to establish the relative performance of CAD/CAM packages and to take into account the ease or difficulty of integrating these packages. In the future, new packages should be designed taking into account the need to integrate with other packages, and ideally the source code should be available to users so that they can enhance the systems to meet their own special requirements. The objective of the symposium is to provide a high degree of interaction between delegates and speakers and to highlight the prime packages, their performance and their use by major parts of the industry.

FIG. 1  Integrated CAD/CAM system. System simulation, logic synthesisers, logic simulation and an auto test generator form the central spine; the PC route adds auto placement and routing, while the custom LSI route adds circuit simulation, semi-auto layout and process simulation; all are linked to an integrated engineering data base and a business data base.

TECHNICAL FORUM

Chairman: Jacob VLIETSTRA, Philips, Eindhoven, The Netherlands

TECHNICAL FORUM I

Chairman: Jakob Vlietstra, Philips Eindhoven
Panel: Luther Abel, DEC, USA; Fred Hembrough, Raytheon, USA; Roy McGuffin, ICL, UK; F. Klashka, Siemens, West Germany

COMPUTER-AIDED DESIGN of digital electronic circuits and systems, G. Musgrave, editor
North-Holland Publishing Company
© ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

Introduction

Each member of the panel had presented papers earlier in the symposium in which they had indicated the extensive use their respective companies are making of CAD facilities, even though the individual implementations varied considerably. As advocates of CAD, the chairman immediately asked for confessions! What problems had they had with their various CAD suites? This resulted in some very frank statements from the speakers, which prompted discussion and comment from the conference audience. For convenience and clarity it is best to summarise the discussions under the basic problem headings of data base management, system integration, man-machine interfaces, and discipline versus creativity.

Database Management

This was recognised by many participants to be an important key to the success of any CAD application. There were contrasting views between those who believe there should be rigid control of what data (usually meaning component information: function, geometry, electrical parameters, mechanical parameters, source, etc.) should be placed on the data base, and those who advocate that every user should have the right to place any component in the library, even if it was from the 'radio shack' round the corner: the one approach being more strictly controlled, the other more flexible. The former group, who either used a 'vetting' committee or quality control engineers from the purchasing departments, agreed there were problems on two fronts: the rigidity tended to stifle innovation from the designers on the one hand, and as the quality assurance departments never used the data base there was little concern about its accuracy and up-keep on the other. There was a body of opinion who believed that the design staff, the primary users, were the people who should control the data base: 'after all, they cared!'

There were other strata of the data base which can best be described as functional levels. For example, the component information was considered by some to be distinct and separate from the simulation information; thus it was necessary to have separate simulation data bases. The arguments presented centred on the necessary knowledge and understanding of the required model by the user for a particular application. This was certainly a problem area highlighted in the survey study, and the participants agreed that many people used packages without the necessary understanding of the models used. However, having a separate simulation library caused problems of how to keep the libraries in synchronism. Thus many favoured the engineering compromise of having local data bases and global data bases. In general there seemed to be acceptance of the difficulty of allowing flexibility to the users while retaining discipline and control of the data base; at the same time it was appreciated that the same group wanted degrees of freedom which cannot be tolerated with such incredibly large, complex systems.
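The 'engineering compromise' of local plus global data bases that the forum favoured can be sketched as a simple lookup rule: a project-local library may add or override entries in a centrally vetted global one. The component records, part numbers and field names below are invented for the illustration; no system described at the symposium is being reproduced here.

```python
# Sketch of the local/global data base compromise discussed above (an
# assumed example): a vetted global component library, plus a per-project
# local library whose entries take precedence.

GLOBAL_LIBRARY = {  # centrally controlled, 'vetted' component data
    "7400": {"function": "quad 2-input NAND", "pins": 14, "source": "vetted"},
    "7474": {"function": "dual D flip-flop", "pins": 14, "source": "vetted"},
}

class ComponentDataBase:
    def __init__(self, global_lib):
        self.global_lib = global_lib
        self.local_lib = {}  # per-project additions and overrides

    def add_local(self, part, record):
        # Any user may place a component here, even one from the
        # 'radio shack' round the corner.
        self.local_lib[part] = record

    def lookup(self, part):
        # Local data takes precedence; otherwise fall back to the
        # global, quality-controlled library.
        if part in self.local_lib:
            return self.local_lib[part]
        return self.global_lib[part]

db = ComponentDataBase(GLOBAL_LIBRARY)
db.add_local("7400", {"function": "quad 2-input NAND", "pins": 14,
                      "source": "unvetted local supplier"})
print(db.lookup("7400")["source"])  # local override wins
print(db.lookup("7474")["source"])  # falls back to the global library
```

The synchronisation problem the forum raised is visible even in this sketch: once "7400" exists in both libraries, nothing forces the two records to stay consistent.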
Although many of the technical aspects of the data base had been solved, on the management side there was still a great deal to be learnt. Certainly some members agreed that component libraries were very important and were absorbing a great deal of time and effort which could be shared.

System Integration

This subtitle can have two meanings:

1  the integration of different software packages to form a CAD suite;
2  the integration of CAD with the user and the manufacturing process.

In fact both aspects were discussed in the forum by the panel and by speakers from the floor, all of whom indicated the importance of both aspects of system integration.

A whole range of 'integrated' systems was cited; in one extreme it was necessary to use different input languages for different parts of the suite, with further languages for editing. A number of experienced CAD specialists did emphasise the importance of having a planned overall structure for the whole CAD suite, but said that it was essential to do a step-by-step development; this is particularly important when there are so many applications with ill-defined problems. By defining the problem/application in small steps and solving it before moving to the next one, you gain the confidence of the user and thus ease the widely expressed problem of gaining acceptance of design programs amongst designers. Nevertheless the company concerned was able to enjoy many of the advantages of CAD while it progressed the system integration. Centralised coordination was considered to be important, particularly when the company had multi-site working, with the often further disadvantage of different host computers. It was evident that once CAD had gained status in project terms the task of the central CAD unit was considerably easier in many ways, although its work load increased as the demand for its services grew. It was further considered desirable and possible to rationalise the manufacturing process using CAD to cope with large as well as small digital systems products.

The integration of the user with the software available had been mentioned in connection with a number of papers presented at the symposium. The education of users was the subject studied in the EEC project: 'it is most important that they know what can and cannot be done (using CAD) for their project'. Fred Hembrough summed it up when he said 'Optimistic and pessimistic views (of CAD potential) are bad, particularly when held by managers'. He went on to say his company has a project to make CAD software 'User Friendly', a phrase that many users hope will be practised by the CAD program creator.

There is a growing probability that no one company will be able to develop in-house software for the many applications of CAD to its business if it has not already got a substantial investment. Consequently more companies will be purchasing programs which, to be effective, must be compatible with existing packages, the data base and manufacturing machines (A.T.E., etc.). A number of speakers were keen to emphasise the importance of ensuring that the outputs from the design programs were able to drive the manufacturing aids (mask plotters, automatic wiring machines, tapes, etc.) without further processing/translation. (Surprisingly there was little formal discussion on the portability of software, although it was a topic of frequent discussion at informal sessions.)

Man Machine Interfaces

In general many of the speakers indicated that there were a number of projects aimed at improving man's aids to communicate with the machine; some of the projects cited were special-purpose machines designed to support computer graphics for CAD, 'pen and paper' input to accommodate the creative design environment, and voice input. The Scandinavians seemed prominent in this field. A Hungarian member indicated that an optical scanning process showed considerable promise for input of diagrammatic information. There was certainly a difference between the USA and European companies, in that the former made much greater use of graphical interfaces, which supported their belief in interactive working with many of the CAD programs. However, it was pointed out that perhaps the problems of man/machine interfaces were over-emphasised: although the old generation of design draughtsmen may not like the new technology, the younger generation 'know no different' and therefore accept it and work efficiently with it.

Discipline Versus Creativity

One of the dominant problems throughout the symposium was that of coping with the ever-increasing complexity ("doubling every year for the next eight years"). Certainly a number of people from different industries felt that partial solutions could only be achieved by working from the top down in a structured CAD environment. Some companies, notably IBM, had opted for discipline/design rules which the designers had to adhere to, with an extra component overhead of 20-25% but achieving testable designs. This provoked considerable exchange of views on the balance between creativity v rules, flexibility v control, and complexity v testability, and the list may be extended. Roy McGuffin pointed out that we did not understand the nature of design, so how could we successfully produce aids to creative design? Much was discussed, but what was apparent was that no one could offer the panacea.

TECHNICAL FORUM II

Chairman: Jakob Vlietstra, Philips Eindhoven
Panel: Cliff Gaskin, Litef; Doug Lewin, Brunel University, UK; R. Schauer, IBM, West Germany; Mel Breuer, University of Southern California, USA

Introduction

The panel were chosen for their contrasting views on the problem of testing, and these were clearly stated when the panellists summarised their views and philosophy in their opening statements.

Cliff Gaskin had an optimistic outlook: the problems in testing are being solved every day, very often with the close collaboration of component manufacturers and users. This collaboration was very important because the problems of getting the models correct for the application program cannot be solved by the systems test engineer alone. In his view testing problems really do not exist at the SSI and MSI levels, only in LSI and VLSI technology.

In contrast, Prof. Lewin painted a very pessimistic picture. The problem of testing sequential circuits was very old, 11 years, and he could not cite any real progress in that time. Also, as VLSI models more effectively at the silicon level rather than the gate level, the logic design tools should work at this level, including the test pattern generation algorithms and fault models. He asked the VLSI designers to consider more regular structures in their designs, so that partitioning could allow effective synthesis to be practised. However, the worst aspect of this technology advance, in Doug Lewin's opinion, was at the total system level: how do you use VLSI, how can it be put to good use, when there are no system design tools, no system specification, and consequently no possibility of testing at this non-specified total system level?

Prof. Breuer took a more pragmatic view. In his opinion no company could afford to be without automatic generation of test stimuli, because there were always going to be design changes and the present product lead times do not allow for manual regeneration of test waveforms. Here the problem is extremely difficult and there is no universal solution, only a number of strategies, all of which should be pursued. The most important strategy is to design for testability:

a.  In the design stages, use CAD programs to give feedback, ensuring observability and controllability are available.
b.  Use the whole range of techniques, such as added logic, three operating modes, etc., to achieve these characteristics.
c.  Test what you can test for, and redefine the problems which are outside your testing capability.

He finished on an optimistic note by saying that many of the present problems could be solved if the designer took responsibility for testing: if your application program works as specified, well and good; if it does not, then the chip is sent back.

R. Schauer immediately seized on this latter point to show how IBM did not have these impasses: IBM make their own components, and thus their system designers have the required detailed component specifications, fault modes and diagnostics.

The chairman then asked for comments from the floor, hoping the audience had not been too fractured by the diverse views expressed by the platform. It was Dr. Bennetts (Southampton University, UK) who felt moved to present his thesis on testing. Software designers could cope with complexity because they were their own masters and could do a top-down design using structured programming; but the system hardware designer could only go part of the way down the design structure before he depended upon the component manufacturer. When design engineers work in an interactive mode on test generation they can use their full engineering intuition and achieve results; engineering change will go on, and the problem of testing needed a flexible environment for the designer. He went on to share Cliff Gaskin's view that industry will cope.

Dr. Paul Roberts (SMC, USA) considered that Mel Breuer's suggestions on redefining the problems were just not acceptable, because most of the present designs just cannot be redefined. Summarising his views, he considered that people were not looking at the real problem; they were only seeking short-term solutions. In the electronics industry we needed to take note of the philosophy prevalent in high-risk industries, aerospace and nuclear, and accept fault tolerance as a design criterion.

Other speakers asked whether the testing problems would effectively halt the development of VLSI, and whether this could be a reason why custom LSI manufacture was increasing. The panel did not believe VLSI development would be particularly hampered, nor did they believe this was a reason for the growth of custom LSI, as any sane manufacturer would insist on some testing.

Perhaps the only agreement that could be seen in this session was the necessity to ensure that the designer took responsibility for testing very early in his design strategy, if only to satisfy his customer.

EUROPEAN ECONOMIC COMMUNITY PERSPECTIVE

Chairman: S. Bir, E.C., Brussels, Belgium
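Breuer's point that automatic generation of test stimuli is indispensable can be illustrated with a toy generator: for a small combinational circuit, exhaustively search the input patterns for one whose output differs between the good circuit and a copy containing a chosen stuck-at fault. The two-gate circuit, net names and fault list below are invented for the example; real test generators of the period used far more sophisticated algorithms than exhaustive search.

```python
from itertools import product

# Toy automatic test generation sketch (assumed example, not from the
# forum): find an input pattern that distinguishes the fault-free circuit
# from a copy containing a single stuck-at fault.

def circuit(x, y, z, fault=None):
    """out = AND(OR(x, y), z); `fault` optionally pins one net to 0 or 1."""
    nets = {"x": x, "y": y, "z": z}
    if fault:
        net, value = fault
        nets[net] = value          # inject a fault on a primary input
    g1 = nets["x"] | nets["y"]     # OR gate
    if fault and fault[0] == "g1":
        g1 = fault[1]              # inject a fault on the internal net
    return g1 & nets["z"]          # AND gate

def generate_test(fault):
    # Exhaustive search over all input patterns (feasible only for toys):
    # a pattern detects the fault if the two outputs differ.
    for pattern in product((0, 1), repeat=3):
        if circuit(*pattern) != circuit(*pattern, fault=fault):
            return pattern
    return None                    # the fault is undetectable

test = generate_test(("g1", 0))    # internal net g1 stuck-at-0
print(test)
```

The returned pattern simultaneously controls the faulty net to the opposite value and makes the fault observable at the output, which is exactly the controllability/observability pairing Breuer's strategy (a) asks the designer to provide.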
FINAL SESSION

EUROPEAN ECONOMIC COMMUNITY PERSPECTIVE

Chairman: S. Bir, EEC, Brussels
Speakers: C. Layton, EEC Directorate-General for Internal Market and Industrial Affairs; Prof. H. De Man, Head of DP Projects Bureau, EEC, Chairman of the Technical Committee of the CAD Electronics Study

The Chairman explained how, at a very early stage of the CAD study project, it was recognised that a European symposium on the subject would be useful to community members and would provide informative feedback to the Commission. The purpose of this session was to give delegates the opportunity to hear from the Director, Christopher Layton, just how this particular project area mapped into the overall strategy of the Commission, and for him and the Chairman of the Technical Committee, Prof. H. De Man, to hear from the delegates their views on what direction further work in this area should take.

Christopher Layton briefly explained to delegates where this type of work fitted into the EEC structure and plans. Essentially, the action on Data Processing has a four-year programme with the unanimous support of member states, and this has two specific headings: a. standardisation, and b. an application support scheme (32,000,000 European units of account). The CAD electronics field had implications under both headings and therefore fitted within the spectrum of work. Engineers and managers were reminded that, as with any programme, there were competing claims for resources, and that any decision to favour one proposal instead of another would be taken on a broad base and not necessarily on narrow technical merits.

Mr. Layton then made a number of comments relating to the proposals of Table I:
1. The component data bank: a good application area which was highly eligible; there was political consensus for this and hence resources.
2. Standardisation: he did, however, wish to make it clear that the standardisation was not an effort to close Europe but more to provide a common implementation and thus allow national procurements to deal with the finer details.
3. The data communication network (Euronet): although community support would be desirable, this proposal clearly had implications which could provide the necessary catalyst for further development, but it would need discussions with community P.T.T.'s.
4. Product areas.
5. Education: he said the community has not grappled with education, and there was a need for much greater discussion at member state level and possibly a need for more initiative in the educational field. There was a despondent note to Mr. Layton's comments here: Mr. Teer's opening remarks were cited, namely that the first major task is to learn to apply the state-of-the-art technology, and that up to 50% must be really viable.

He finally placed CAD of electronics in context with VLSI systems and pointed out that the EEC, like member states and the United States, recognised the radical new range of problems this technology threatened. The EEC had created two groups to study and report on 1. applications of VLSI and 2. CAD of VLSI. In fact, some members of these groups were attending this symposium and were benefitting from the international discussion of the CAD aspects of VLSI.

Prof. De Man reminded the audience of some of the proposals for future work that the study had revealed (Table I). He pointed out that the report on the project had given priorities to this list and had developed detailed business plans for a number of the proposals, but that the list presented here did not reflect these findings. De Man also took the opportunity to point out that the procedures for initiating and then completing the study had taken some four years, and that with the evolution rate of technology it was essential that future decisions in this area were taken quickly and acted upon fast.

TABLE I

E.E.C. OPPORTUNITIES

1. Standards: establish a comprehensive set of standards for European use of CAD. The aim would be to allow transferability and interchange of both CAD packages and techniques and also of designed circuit elements and component models. Many of these standards would be established de facto by the network system as in 4.
2. Education: to influence constituent governments to influence education and training courses to reflect the impact of the digital revolution.
3. Retraining: to establish organisations and facilities for the retraining of electronic engineers; there are needs for sharing expert knowledge in the community in order to make retraining efficient and effective.
4. Common Database and Network Communication: establish a component model database and provide the necessary back-up service to maintain and update it and give user advice. To this end, utilisation of a European data communications network would be advantageous.
5. Procurement Lobby: by collective pressure on component suppliers, more detail about components could be forthcoming. In some cases, joint projects to establish data characteristics suitable for component modelling for CAD could be established.

The Chairman then invited questions and comments from the delegates, and many spoke, giving views and experiences as well as asking questions. One delegate, Mr. Patrick (GEC Marconi, UK), expressed the disappointment of many delegates that not enough information had been released about the findings of the study. It was Mr. S. Bir who answered this, explaining that a great deal of information had been solicited from companies under an undertaking of confidentiality. Equally, the study took in a spectrum of views, and it was this overall spectrum and the more general aspects which should be concentrated upon; releasing more details would only cause deviation from the larger problems and goals.

The most dominant theme was that of the problems of creating and maintaining a common data base with appropriate CAD component models. Many delegates considered it unlikely that component manufacturers could be pressurised into providing the necessary details for effective models to be generated, details often obtained by 'negative engineering' (slicing open chips), although it was recognised that if one vendor did, then it was likely that many would follow. There were those delegates who took the view that there were many models already available within European industry. Mr. Gaskin felt that there should be a European spirit, and he for one would consider making Litef's models available to the community; if a number of companies did this, the proposal could quickly provide a needed service. In general, there appeared to be support for the data base proposal, particularly if it included a six-month study which would answer some of the problems cited in the papers and discussion at the symposium.

Mr. Layton expressed special thanks to the contributors from outside the Community, and a general appreciation to all those who had participated was extended by Mr. Bir on behalf of the EEC. Delegates were invited to write to the Director if they wished to express further views on the EEC perspective.

INDEX OF AUTHORS

ABEL, AVENIER, BREUER, CARTER, COLANGELO, DAVIGNON, DE MAN, DE MARI, GASKIN, HEMBROUGH, HOFFMAN, JONES, KANI, KLASCHKA, KLOMP, LATTIN, LEWIN, LIPP, LOOSEMORE, McGUFFIN, MICHARD, MUSGRAVE, MUTEL, PABICH, QUILLIN, RAULT, ROBERTS, SCHAUER, SZYGENDA, TEER, TERAMOTO, TOMLJANOVICH, WOLSKI, YAMADA