
Workshop Report
Microprocessor Architecture and Systems
Mario J. Gonzalez, Jr., Northwestern University
COMPUTER, September 1976

A Workshop on Microprocessor Architecture and Systems was held on May 6-7, 1976, at Northwestern University under the sponsorship of the Technical Committee on Computer Architecture and the Technical Committee on Minis and Micros of the IEEE Computer Society. No formal theme was established, and no summary positions were presented by the attendees, but the following conclusions appeared to emerge:

(1) Multiple microcomputer configurations will become more prominent, but the cost-effectiveness of these approaches will depend on the application.

(2) Reliability and fault-tolerance are driving functions in many of these designs.

(3) A redistribution of resources can lead to more powerful and efficient systems.

(4) There is a need for an integrated, formalized design methodology.

(5) Future systems will need more self-test/self-repair capabilities.

(6) There is a need for better software development systems.

Session highlights

In a presentation on "MIMD Multimicroprocessors as Main Frame Replacements," Gary Tjaden of Sperry Research Center considered the replacement of a large, medium-capacity (approximately one million instructions per second) multiprogrammed, uniprocessor mainframe with a collection of microprocessors in a general-purpose, batch-oriented environment. The uniprocessor system was assumed to have a multiprogramming level of seven, an average user memory requirement of 128K bytes (for a total main memory size of 896K bytes), and a CPU utilization of 99.5%. Tjaden stated that the utilization figure was somewhat arbitrary and provided a measure to determine appropriate parameters for the multimicroprocessor (MIMD) system. In the multiprocessor system studied by Tjaden the collection of identical microprocessors shared a primary memory and a common I/O system. The model ignored considerations such as cost, contention, and overhead of the memory bus, the I/O bus, the operating system, etc., concentrating instead on high processor utilization (99.5%) within each of the microprocessors in the system destined to replace the uniprocessor. Multiprogramming in the MIMD configuration is accomplished by allowing the individual microprocessors to execute independent jobs. Upon completion of a job, or upon encountering a wait condition due to an I/O operation, a microprocessor is switched to another job. In order to maintain high utilization it was necessary to add a certain amount of memory for each microprocessor added to the system. A queuing model was developed to estimate this memory size increase.

Using estimated costs for the 1979 time-frame, Tjaden observed that for a given level of expenditures (for a 1 MIP uniprocessor), normal user jobs, and non-custom software, it is more cost-effective to invest the financial resources to develop a small number of high-bandwidth microprocessors than a larger number of less effective microprocessors. In the limit this number approaches one, i.e., a high-bandwidth uniprocessor. This conclusion is also valid for timesharing environments and systems with virtual memory. The primary reason for this, observed Tjaden, is that the cost of the memory required to maintain the desired high utilization dominates system cost. He did suggest that multimicroprocessor systems may be more cost-effective than a uniprocessor in specialized environments in which the nature of the computational requirements is known beforehand.

Responses by participants to Tjaden's conclusions included the following: (1) If memory costs do indeed dominate, then perhaps memory efficiency should take precedence over processor efficiency. (2) Microprocessors should be considered as stages in a single-stream, pipelined CPU.
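The shape of Tjaden's memory-dominance argument can be illustrated with a toy cost model. All of the figures below are hypothetical, chosen only to show the trade-off, and are not taken from the talk: total throughput is held fixed, but each added microprocessor needs its own working-set memory to stay highly utilized, so memory cost grows linearly with processor count.

```python
# Toy illustration of Tjaden's argument that memory cost dominates.
# All parameters are hypothetical (not from the workshop talk).

def system_cost(n_procs, proc_mips_total=1.0,
                cpu_cost_per_mips=50_000,   # hypothetical $/MIPS
                mem_cost_per_kb=20,         # hypothetical $/KB
                mem_kb_per_proc=128):       # working set needed per processor
    """Cost of an n-processor system delivering proc_mips_total MIPS."""
    # CPU cost is constant: total delivered MIPS is fixed regardless of n.
    cpu_cost = n_procs * cpu_cost_per_mips * (proc_mips_total / n_procs)
    # Memory cost grows with n: each processor needs its own working set
    # to maintain high utilization.
    mem_cost = n_procs * mem_kb_per_proc * mem_cost_per_kb
    return cpu_cost + mem_cost

for n in (1, 2, 4, 8, 16):
    print(n, system_cost(n))   # cost rises with n; minimum is at n = 1
```

Under these assumptions the cheapest configuration is a single high-bandwidth processor, matching the "in the limit this number approaches one" conclusion.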

Dan Atkins of the University of Michigan spoke on "Instructional Use of Bit-Slice Architecture LSI." The reasons for the use of such architectures in an instructional environment, according to Atkins, were (1) they permit non-trivial design case studies with a small set of components; (2) they provide a vehicle for the study of microprogramming and emulation; and (3) they cover the understanding of MOS/LSI microprocessor components, thus providing more design choices and motivating understanding of RT-level design. Atkins discussed and compared the features of a number of bit-slice architectures including the Intel 3000, Signetics 3000, Fairchild 9400, Advanced Micro Devices 2900, Monolithic Memories 5700/6700, TI SBP0400, and Motorola 10800.

In response to a question regarding the future of bit-slice architectures in view of the trend toward wider word-length devices, Atkins indicated that these architectures will predominate in special-purpose environments and in those applications requiring high performance. As an example of an environment which can efficiently use bit-slice architectures, Atkins mentioned the area of process control, in which bit-oriented computation is often required. A continuing problem and a limiting factor in the utilization of these architectures is the development of support software; this support has lagged behind that available for fixed-word-length devices.

The next two speakers, Ken King of DEC and Jack Lipovski of the University of Florida, presented different approaches to a common objective: increased performance and computational efficiency through a reduction of unnecessary interaction between the components of a (micro)computer system. Both speakers also concentrated on a common place for accomplishing this improvement, namely the processor/memory interface.

Ken King, who spoke on "Distributed Function Architecture," advocated the placement of a family of registers within the memory subsystem itself to facilitate address generation for both instructions and operands, and thus relieve the processor of these non-computational functions. These registers could be initialized by specific instructions in an enhanced instruction set. Such an approach would be particularly useful in a situation in which a stream of operands could be shipped to the processor with minimal involvement by the processor itself; this involvement would be limited to the initialization of starting-address and word-count registers in the memory. Such an approach, for example, would relieve the processor of fetching instructions whose only objective would be to increment an index register and generate an address for the next operand. King also proposed solutions to the handling of conditional jumps in such an environment. The implication of such an approach in a microprocessor system is that, as a result of a reduction in the required processor/memory control bandwidth, it is possible to get higher performance out of pin-limited chips by using serial flow between chips to set up the "streaming" environment. The use of a serial flow allows construction of simple networks of processor and memory chips. King observed that approaches of this kind are especially desirable in view of the continuing reduction in the chip cost/interconnection cost ratio. Continuing advances in chip complexity also suggest the feasibility of this approach to a distribution of system intelligence.
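King's register-in-memory idea can be sketched as follows. The class and method names are illustrative inventions, not from the talk: the processor's only involvement is the one-time initialization of the starting-address and word-count registers, after which address generation happens on the memory side.

```python
# Sketch of King's "distributed function" streaming scheme: the memory
# subsystem holds the starting-address and word-count registers and
# generates every operand address itself. Names are illustrative only.

class StreamingMemory:
    def __init__(self, contents):
        self.contents = list(contents)
        self.addr = 0        # starting-address register
        self.count = 0       # word-count register

    def init_stream(self, start, count):
        """The only processor involvement: one-time register setup."""
        self.addr, self.count = start, count

    def next_word(self):
        """Memory-side address generation; the CPU just consumes words."""
        if self.count == 0:
            return None
        word = self.contents[self.addr]
        self.addr += 1
        self.count -= 1
        return word

mem = StreamingMemory(range(100))
mem.init_stream(start=10, count=4)
stream = []
while (w := mem.next_word()) is not None:
    stream.append(w)
print(stream)   # → [10, 11, 12, 13]; the CPU issued no per-word addresses
```

The point of the sketch is the division of labor: no instruction fetches were spent incrementing an index register or forming the next operand address.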

In his presentation, "On a Microprocessor Architecture for a Micronetwork," Jack Lipovski stated that a slight redistribution of microcomputer resources can lead not only to faster microcomputers but also to microcomputer systems in which virtual memory, stack processing, multi-precision and vector arithmetic, and pipelined operations (primarily in terms of overlapped instruction execution and address generation) are obtained. He observed that servicing a page fault in a virtual memory system reduces to some overhead activities plus the transfer of a fixed number of words, a "packet" in Lipovski's terminology. Lipovski noted that a single memory chip can be configured so that it contains exactly the information which constitutes a page. Two registers added to each of these chips could be used to implement a stack pointer, a bounds register, an age-since-last-use counter, etc. He presented a scheme which permits chips being loaded with requested pages to be temporarily switched from the processor interface to the I/O interface, thus permitting the processor to access other active pages (chips) without interference from the paging process.

Lipovski observed that his approach to virtual memory implementation in microcomputer networks would (1) allow the sharing of expensive peripherals, (2) facilitate the implementation of large programs, (3) take advantage of packet-sized transfers of information, and (4) lead to lower memory costs.
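The interface-switching scheme can be sketched in a few lines. All names here are illustrative, not Lipovski's: each chip holds exactly one page and is attached either to the processor interface or to the I/O interface, so a chip being filled with a requested page never blocks the processor's access to the remaining active pages.

```python
# Sketch of Lipovski's page-per-chip scheme: a chip being loaded with a
# requested page is switched to the I/O interface, leaving the processor
# free to access the other chips. Names are illustrative only.

class PageChip:
    def __init__(self, page_id):
        self.page_id = page_id
        self.interface = "processor"   # or "io" while a page is loading
        self.age = 0                   # age-since-last-use counter

class MicroMemory:
    def __init__(self, n_chips):
        self.chips = [PageChip(i) for i in range(n_chips)]

    def begin_page_load(self, chip):
        chip.interface = "io"          # detach from processor during transfer

    def end_page_load(self, chip, new_page_id):
        chip.page_id = new_page_id
        chip.age = 0
        chip.interface = "processor"   # reattach once the packet arrives

    def accessible_pages(self):
        """Pages the processor can touch while a fault is being serviced."""
        return [c.page_id for c in self.chips if c.interface == "processor"]

mem = MicroMemory(4)
mem.begin_page_load(mem.chips[2])      # chip 2 is loading over the I/O bus
print(mem.accessible_pages())          # → [0, 1, 3]: no paging interference
```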

The presentation closed with the following questions: Does a normal instruction set really consist of two instruction sets, one for handling data and one for handling instructions? And what is the interaction between busses and programs? In supplying his answers to these questions Lipovski further elaborated on his approach to a decentralization of system resources.

Geoff Leach of Sycor, Inc., spoke on "Experiences with the Development of a Microprocessor Language." SYCLOPS (Sycor Language for Operating Systems) was developed for Sycor's microprocessor-based systems, the first of which is an interactive data entry and verification system, the Sycor 440. Leach outlined the reasons for Sycor's decision to replace PL/M. From the corporate point of view the principal reason was to develop control over Sycor's principal software tool. Economically, specialization promised improved efficiency. In a technical sense the development of a new language permitted the implementation of modern advances in language design (i.e., better compiler writing and compilation techniques) and usage (i.e., better programming practices).

Sycor established the following language goals for SYCLOPS:

(1) complete control over maintenance and the generation of extensions;

(2) code efficiency should be as high as possible;

(3) the language should permit the development of future systems;

(4) the language should support the use of microprocessors other than the one used in the development of the first SYCLOPS-based system (program portability outside of Sycor was not considered as one of the language goals); and

(5) downward compatibility with PL/M.

Leach also discussed the resources required for compiler development in terms of personnel, analysis tools, and development languages. After relating some of the actual extensions to PL/M that are contained in SYCLOPS, he observed that language development can be a worthwhile undertaking for a small company. Development of SYCLOPS at Sycor required 2 to 2½ man-years using three people.

John Tartar spoke on "Application of Microcomputers in Control Environments." Tartar, of the University of Alberta, described a distributed processor system as an implementation of well-defined algorithms with well-defined inputs and outputs migrating outwards from a centralized structure to individual units. Such an approach promotes reliability by assigning few responsibilities (perhaps only one) to each of the individual units. He viewed one of these individual units as a microprocessor with at most 2K words of private memory.

Tartar cited a number of problem areas in this approach to real-time control. The first of these is the naive user who knows a lot about the problem but has little or no knowledge of microprocessor-based systems. Such a user, for example, may not know how to react to error conditions and may require extensive indoctrination in the use of the controller. A second problem area is that of implementing the algorithm in a cost-effective manner.

In response to a number of questions Tartar made the following observations:

(1) Eight bits are satisfactory for most microprocessor-based control systems. Since the nature of the problem is well defined and since there is little need for floating point operations, normal fixed point operations generally are satisfactory. Sixteen-bit microprocessors will have a significant impact on these environments, however.

(2) In process control environments many operations are of a logical (boolean) nature. It is expected that a microcomputer will be used to retain historical information, perform complex algorithms, and thus permit smoother projections.

(3) Microprocessors are being used to replace minicomputer-based systems, though at the present time microprocessors cannot perform some of the more sophisticated operations (e.g., Kalman filtering) performed by minicomputers. Bit-slice architectures can be used for these sophisticated operations, but the trend is toward simpler rather than more complex nodes in distributed processor systems. It may be, however, that larger systems will be characterized by more complex nodes.

(4) There is very little (if any) communication between the autonomous outer nodes. Most communication is between the central node and the outer nodes.

(5) Most validity checking is in the form of the central node and the outer nodes checking each other periodically.
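Observations (1) and (2) above combine naturally: a control microcomputer can use retained history to smooth its projections using only fixed-point arithmetic. The sketch below is one hypothetical way to do this (the scale factor and smoothing weight are illustrative choices, not from the talk): simple exponential smoothing with integer operations only.

```python
# Exponential smoothing in pure fixed-point arithmetic, of the kind an
# 8- or 16-bit control microcomputer could run with no floating point.
# SCALE and ALPHA are illustrative choices, not from Tartar's talk.

SCALE = 256          # fixed-point scale (8 fractional bits)
ALPHA = 64           # smoothing weight: alpha = 64/256 = 0.25

def smooth(history_scaled, sample):
    """Blend a new integer sensor sample into the scaled running estimate."""
    return (ALPHA * sample * SCALE + (SCALE - ALPHA) * history_scaled) // SCALE

est = 100 * SCALE                    # initial estimate of 100, pre-scaled
for sample in (100, 104, 96, 120):   # noisy sensor readings
    est = smooth(est, sample)
print(est // SCALE)                  # → 104, damped relative to the 120 spike
```

Retaining the single scaled integer `est` is the "historical information"; the division by a power-of-two scale is a shift on real hardware, so the whole update fits comfortably in fixed-point.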

Tartar concluded with the following comments: (1) Future development systems should be more helpful to the non-computer practitioner. (2) More suitable, environment-oriented languages are needed, but the design of the language should not be left to the control system designer. (3) Reliability and suitability to the environment are the two primary requirements of microprocessor-based control systems. (4) Future chip design should reflect the unique requirements of various process control environments.

"Projections of Microcomputer Usage" was the title of the presentation by Bill Dejka of the Naval Electronics Laboratory Center. Dejka divided future microcomputer system designs into three areas: (1) classical designs consisting of a single computer, (2) dedicated designs consisting of from 1000 to 10,000 computers, and (3) brainlike designs requiring millions of computers. His presentation concentrated on the first two of these.

According to Dejka, classical designs will be found in areas such as consumer goods and small business systems. These mass-production-oriented designs will need to pay special attention to support problems and the development of self-diagnosing computers.

Dedicated designs will be characterized by the use of multiple-microprocessor and bit-slice architectures and by very complex software problems, which will constitute a significant hidden system development cost. These dedicated designs, tailored to human use, will appear in a number of known high-usage application areas such as linear programming, regression analysis, file searching, and associative processing. These designs will be characterized by self-test/self-repair features and the use of sophisticated computing primitives (e.g., interface chips).

In Dejka's opinion a very necessary occurrence before these complex designs become reality is the development of a methodology which can compare and evaluate various architectures. Such a methodology should consider more than performance and should include functional integrity, contention for shared resources (and the consequent creation of queues), reliability/maintainability, and life-cycle cost.

Another important consideration is serviceability and a reduction of the "throw-away syndrome" that has characterized recent technological developments in general. Serviceability considerations should include built-in diagnostics, stand-by redundancy, and a reduction of maintenance and the need for skilled technicians. Dejka foresees a need for n-dimensional architectures in which built-in testing is independent of function processing. He also stated that designers should get more information before finalizing a solution to a particular algorithm. Such a process, for example, should consider memory/hard-logic trade-offs.

Dejka summarized by stating that the two most important considerations for future designs are a microcomputer system design methodology and self-diagnosing computers.

The starting point of Bill Lennon's presentation, "Integrated Design and Working Documentation," was his observation that structured program design and top-down system design and implementation techniques have proven their worth in computer systems design. The Northwestern University professor qualified this remark by stating that many of the benefits ascribed to these current design strategies are due in large measure to particular characteristics of individual high-level languages. Lennon then reported on the definition of an integrated design strategy which is relatively independent of both target computer and programming language. This strategy can be used at the present time by non-specialist users of microcomputers, rather than at some future time when improved languages for microcomputers become available.

Recognizing the limitations of the human mind, the system designer constrains himself to problem and program descriptions composed of about half a dozen relatively independent units. Details in each unit are then similarly described. The modules thus described are documented within a comment block in a highly stylized fashion. Finally, after all of the stylized descriptions are completed, actual coding in assembly language is begun.

Several advantages accrue from this technique. First, trivial programs may be written to strip off the comment structures from the programs; because of its convenient access, this "working documentation" keeps an accurate account of the program. Second, by explicitly describing the logic of the program in a pseudo-high-level language, the designer has effectively written a "top-down" program without the psychological problems attendant to writing a program. Finally, as the individual modules are coded and combined into subsystems, the designer has the distinct psychological advantage of always putting together working subsystems and getting rapid reinforcement as goals are constantly being met.

Lennon discussed the details of this approach particularly in light of the needs of non-specialists who are currently performing system design in the burgeoning field of microcomputer applications with either existing or home-brewed tools.
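A "trivial program" of the kind Lennon describes might look like the sketch below. The `;**` block delimiter is an invented convention for this illustration, not Lennon's actual format: the script pulls the stylized design-comment blocks out of an assembly source file so they can serve as working documentation.

```python
# Strip stylized design-comment blocks out of an assembly source so they
# can be read as working documentation. The ';**' delimiter is an invented
# convention, not Lennon's actual comment format.

def extract_design_blocks(source_lines):
    """Return the text of every ;** ... ;** comment block."""
    blocks, current, inside = [], [], False
    for line in source_lines:
        if line.strip() == ";**":          # block delimiter: toggle state
            if inside:
                blocks.append("\n".join(current))
                current = []
            inside = not inside
        elif inside:                       # inside a block: keep the text
            current.append(line.lstrip("; ").rstrip())
    return blocks

program = [
    ";**",
    "; MODULE: keyscan",
    "; PURPOSE: debounce and queue keystrokes",
    ";**",
    "        LDA KEYPORT",
    "        STA BUF",
]
print(extract_design_blocks(program))
```

Because the documentation lives in the source itself and is recovered mechanically, it tends to stay accurate as the program changes, which is the point of Lennon's "working documentation."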
