Embedded MCU Debuggers

Embedded MCU Debuggers by Chris Hills Edition 3.3 19 January 2006 Part 2 of the QuEST series quest.phaedsys.org/ [email protected]

Transcript of Embedded MCU Debuggers

Page 1: Embedded MCU Debuggers



www.phaedsys.org page 2 of 81 19/01/2006


Embedded MCU Debuggers

Third edition (3.3) January 2006 by Eur Ing Chris Hills BSc(Hons), C.Eng., MIEE, MIEEE, FRGS

Second edition April 1999 by

Chris Hills

Presented at the Java C & C++ Spring Conference, Oxford Union, Oxford, UK, April 1999

For the Association of C and C++ Users, see www.accu.org

Copies of this paper (and subsequent versions) and the PowerPoint slides will be available at

http://quest.phaedsys.org/

This paper will be developed further. Copyright Chris A Hills 1999, 2001, 2005, 2006

The right of Chris A Hills to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Quality Embedded Software Techniques

QuEST is a series of papers based around the theme of quality embedded systems - not for a specific industry or type of work, but for all embedded C. It is usually faster, more efficient and, surprisingly, a lot more fun when things work well.

QuEST 0 Introduction & SCIL-Level
QuEST 1 Embedded C Traps and Pitfalls
QuEST 2 Embedded MCU Debuggers
QuEST 3 Advanced Embedded Software Testing
QA1 SCIL-Level
QA2 Tile Hill Style Guide
QA3 QuEST-C
QA4 PC-Lint and DAC MISRA-C Compliance Matrix


Contents

1. INTRODUCTION 7
  1.1. BASIC METHODS 8
  1.2. ROM-MONITORS 10
  1.3. THE CPU SIMULATOR 12
  1.4. THE FIRST ICE 14
  1.5. HOW DOES AN ICE WORK? 16
  1.6. THE LOGIC ANALYSER 19
2. BRAVE NEW WORLD 21
  2.1. BRINGING THE ROM-MONITOR DEBUGGER UP TO DATE 22
  2.2. USER APPLICATIONS WITH INTEGRAL DEBUGGERS 24
  2.3. MODERN SIMULATORS 25
  2.4. MODERN IN CIRCUIT EMULATORS 27
  2.5. NEW ICE FEATURES 35
    2.5.1. Code Coverage 35
    2.5.2. Performance Analysis 38
3. NEW METHODS & N-WIRE DEBUGGERS 40
  3.1. ICE-CONNECT (HITEX) 40
  3.2. HOOKS 41
  3.3. ONCE MODE 43
  3.4. BDM (MOTOROLA) 44
  3.5. JTAG 47
  3.6. JTAG PRE-HISTORY 49
  3.7. AMDEBUG 52
  3.8. ON-CHIP DEBUG (OCDS) - INFINEON 54
    3.8.1. OCDS Level-1 54
    3.8.2. OCDS Level-2 54
  3.9. NEXUS 55
  3.10. SUMMARY OF N-WIRE SYSTEMS 58
4. A PLACE FOR EVERYTHING 60
  4.1. THE SIMULATOR 60
  4.2. THE ROM MONITOR 60
  4.3. THE EMULATOR 60
  4.4. PROJECT COMPARISON 61
  4.5. TARGET REQUIREMENTS 61
    4.5.1. Monitor 61
    4.5.2. Emulator 61
    4.5.3. Simulator 61
5. COMPARISON OF BENEFITS 62
  5.1. MONITOR ADVANTAGES 62
  5.2. MONITOR DISADVANTAGES 62
  5.3. SIMULATOR ADVANTAGES 62
  5.4. SIMULATOR DISADVANTAGES 62
  5.5. EMULATOR ADVANTAGES 63
  5.6. EMULATOR DISADVANTAGES 63
  5.7. SUPER MONITOR 63
  5.8. PERFORMANCE COMPROMISES 64
  5.9. CRADLE TO GRAVE SUPPORT 64
6. POST PRODUCTION 65
7. ICE OF THE FUTURE 66
8. CONCLUSIONS 68
9. REFERENCES 70
10. STANDARDS 76


Micro-controller Debuggers - Their Place In The Micro-controller Application Development Process

1. Introduction

Embedded systems are different from most "normal" computer systems. Usually they have no screen or keyboard; they have strange IO and peripherals working at their own time and pace, not yours. More to the point, these strange peripherals often control external equipment, much of it in safety-critical, medical and transport systems. Some, in systems like rockets and cruise missiles, have to be tested fully before the equipment is used… you can't just "run it and see…"


Embedded systems use strange processors specially made for the job. Most often the software for the system is developed on a workstation (more likely a PC these days) and "loaded" on to the target after it has been compiled and linked. Debugging requires either the target system to be simulated on the development platform or remote debugging of the target from another machine. This has caused many weird and wonderful schemes and tools to be developed over the years. This paper will look at the main methods that have been developed and those that have survived through to current usage.

Back in the good old days when microprocessors were new (the mid 1970s), microprocessor programs were developed in assembler and "blown" or burnt into EPROMs such as 2708s and 2716s - remember those? There was no possible way of knowing what the program was actually doing, step by step, as it ran. Looking for bugs, ingenious methods such as twiddling port pins and flashing LEDs were used to tell the programmer where the program was - not how it had actually got there. With a lot of sweat and a lot of ingenuity, working programs were produced.

Then a few had ROM-based monitor programs whereby an array of seven-segment LEDs and a hex keypad allowed assembler to be single-stepped and simple execution breakpoints to be set. In some instances, a serial connection to a computer or terminal allowed more flexibility and interactive debugging via archaic and cryptic commands.


Note this was on an 80-column mono screen. The only graphic visible would have been the manufacturer's logo on the case.

There were also some very rudimentary software simulators. Again, these were command-line tools with strange command sequences on a monochrome character-based screen. It was hardly real time; more a case of run to a point, stop, dump the registers and maybe a block of memory, and see what state things were in. Rudimentary timing was possible by knowing the number of cycles each instruction took. The problem with simulators was that they were usually hand made and not exactly portable. We are talking of a time before PCs and the omnipresence of MS Windows, or inexpensive UNIX workstations. There was no common host platform or GUI environment.

An elite few, in the largest [richest] companies, were "blessed" with the ultimate tool - the In-Circuit Emulator (ICE). However, such was the initial cost (around $20,000 in the early 1970s), the limited features and the subsequent unreliability of some of these early devices, that they were often ditched in favour of the more reliable monitors or logic analysers.

The one thing emulators did have in common with monitors (and later the logic analyser) was their strong assembler orientation. High-level languages were for wimps; besides, there was no direct and visible line from the HLL to the binary. This made the engineers of the day very nervous of compilers. Compilers and their languages were also in their infancy, so this scepticism was often well founded. The axiom that assembler is best because there is a direct one-to-one relationship between the code and the binary still pervades the industry today. Several parts of the industry mandate assembler in the mistaken belief that it is safer than using a high-level language.

1.1. Basic methods

When there was no test equipment one simply wrote the code, compiled it (sometimes linked it[1]) and burnt it into an EPROM. There was no white-box testing of the code, on or off the target. It was not possible to look inside the system as it ran. The "Burn and Pray" brigade worked on the assumption that if the correct outputs appeared during testing then the code must work, therefore the program was "tested". As there is a strong relationship between assembler and the resultant binary image in ROM, it is true to say that the developers knew reasonably well what was in the PROM. At least they thought they did…. The use of MACRO and optimising assemblers changed the relationship between what was written and what actually ended up in


[1] When there was only one file there was no linking to do. Many old programs were one asm file.


the ROM, though the list and map files were a great help in seeing where things actually were, and most good assembler engineers knew how their tools worked. This was the time when a good EPROM programmer with editing capabilities was very useful. I have one that had the usual nixie-tube display and the ability to drive a monitor (screen) for editing sessions. It was expected that professionals would patch the hex!

Using C (or any other HLL) there is no direct relationship between the HLL source and the bytes in the ROM in the way there was with assembler. Besides, compilers for HLLs were relatively new, as were the languages and their definitions. The other problem was that compilers were not always very good at producing the best assembler from the HLL. Thus the programmer could not say with any certainty that he knew exactly what was in the ROM without an ICE or ROM-monitor. True, the map files helped a lot, but this only goes so far and does not help with code execution. The ICE and ROM monitor use other tools and produce other files to ensure they do know where they are in the source.

I have seen an interesting case where an optimising compiler noted that a value was written to a register and then to memory. The value was then immediately read back from memory into the register and compared with the value that was first written to the register. The compiler realised that the write to and subsequent read from memory, always being the same address, was "superfluous". It optimised them out, saving both time and code space. It speeded up the memory test quite a bit. The memory test never failed - but of course it never touched the memory that it was "testing"! Hence the C keyword: volatile.

In the second edition of this paper I wrote, "Surprisingly in 1999 the practice of burn-and-pray still goes on. Though I hope that these days the code is, at least, tested with a software simulator first. I hope that the 'Burn and Pray' technique will not continue into the 21st Century." Unfortunately, there has been no miracle and sudden conversion to good engineering methods. The practice still goes on, but I am happy to report that in 2002 there was more of a tendency to do things correctly. I am hoping that the amendments to the Manslaughter Act for Corporate Manslaughter will expedite this tendency! (See QuEST 1 Embedded C Traps and Pitfalls)


The use of LEDs and toggling port pins is like a rudimentary printf("Hi! I am at line x \n"); It probably tells you where you are in a program. This technique assumes that the software is functioning to the point where the port pin is toggled. There is no guarantee that the target is in the state it is supposed to be in. Using this technique to test hardware is one thing, but for software testing it is of dubious value.

These techniques of port pins and LEDs are still in use today, but generally for post-production error reporting. An example of this is the POST[2] beeps produced by a PC during boot-up, or status LEDs on an ICE. However, this is as part of a fully tested and working system, to show certain known (usually hardware) errors. In the case of a PC it can indicate a missing keyboard, disk, CMOS information etc. Usually it requires working [initialisation] software. It is rarely, I hope never, used as a main test method in SW development anymore. Using 7-segment LEDs some additional information can be obtained. These techniques are hopefully now only used in programs to test new hardware rather than to debug the software. They are also used as part of some systems to give fault information to service engineers when faults develop in the field. Again these are usually hardware problems, i.e. something is physically broken.

The next stage was to have a terminal connected to a serial port on the target, with printf statements to the serial port sending out the state of the target. However, this requires hardware to support the debug port and debug code in the application source. This form of debug is, as mentioned, still built in for field service technicians. It is becoming increasingly popular as FLASH memory permits easy field upgrades of system software without the need to disturb the hardware; in many cases it removes the need to even open the equipment.

1.2. ROM-Monitors

Engineers required more than toggled port pins and flashing LEDs. They wanted to see what was going on inside the MCU. The problem with an embedded MCU is that not only is the CPU in the chip but, as can be seen from the diagram, many of the peripherals are also inside the chip. Many of the address, data and control lines never become visible at the edge of the chip. This means that, even with a logic analyser, it is not possible to have any view of the inner workings of the part. In a single-chip design, where no external memory or peripherals are used, the part is the system.


[2] Power On Self Test, used on many systems, e.g. PCs, alarm systems etc., to give rudimentary debug information if the system fails to power up and self-test correctly.


Thus ROM-monitors were developed to give this internal view. In some cases they could only work with external memory; in others they were part of the application. A ROM monitor is a very simple and hopefully very small piece of code that permits a user to have some control over the program under test whilst it is actually running on the target. The monitor code is loaded into the target. There were two types of monitor: one that was part of the application and "compiled in", and one that was more like an operating system, loaded on to the target with the application loaded on top of it.


When an application is loaded under control of the monitor, the monitor runs it. In most cases single step or run to breakpoint was possible. This has a couple of effects. Firstly, the monitor code takes up space, thus distorting the memory map. Even the smallest ROM-monitor required several Kbytes of space. Secondly, it takes time to run the monitor code, therefore the program could never run in real time.

The trouble with these early ROM monitors was that they had few commands and a very simple (and often cryptic) interface via a serial link to a dumb terminal. They also required a working serial port on the target. The command range was usually very limited: single-step (but only on 3-byte opcodes in the case of the 8051), breakpoints but not watchpoints, display registers or memory, read memory. The user interfaces were very primitive, often limited to responses printed to the screen as a command (usually a letter) was entered. Memory could be displayed as a snapshot, not a continually updating window. The connection to the host was usually via a slow serial link. Over the years monitor interfaces have improved. The major drawback, apart from not being "real-time", is that ROM monitors required the target to be largely working correctly, and the insertion of a ROM monitor changes the memory map. ROM monitors also cost money over the cost of the actual software, as every board that you wanted a monitor on had to have an additional serial port.


This means that the hardware used for testing with the monitor was often not the same as the production item. However, ROM-monitors did permit vision inside the target and control of the software on the target. They were often seen as the "Poor Man's ICE" as they were relatively inexpensive. Many programmers were able to write their own monitors. However, as noted, there was a cost in the hardware requirements, or development systems would have to be different from the production boards, which of course brings its own problems.

1.3. The CPU Simulator

More recently another form of debugger has arisen which fits, cost- and performance-wise, between emulators and conventional monitors. This is the CPU simulator, sometimes called the instruction set simulator. Initially simulators were often a product of the compiler manufacturers. Some of the silicon vendors also produced simulators. These simulators were often made available at a (highly) subsidised cost before the real silicon became available in quantity, to kick-start development and sales. It also created a market for 3rd-party tools, assuming the part started selling.

Simulators seek to provide a complete simulation of the target CPU in pure software. The initial simulators traded functionality and interface for speed. In those days target systems were not much slower than the host development platforms. They usually worked in hex rather than the HLL. Thus, when presented with an executable object file, the behaviour of the program could be observed and any errors identified. Of course, being a software simulation, execution does not proceed in real time, and all IO signals must be generated by special routines designed to mimic peripherals or other extra-CPU devices.

Simulators started to emerge when PCs and workstations became powerful enough (and cheap enough) to run simulation SW at a reasonable speed. Though the term "reasonable speed" is rather subjective, and what was reasonable then is not now. The other determining factor was a "standard" interface. CP/M was one, but the emergence of a single OS for the majority, i.e. MS-DOS (and later MS Windows), made it commercially sensible to produce debuggers at a "reasonable" cost. The interfaces were initially rather basic. Like the monitor, the early simulators had basic and cryptic interfaces. Actually monitors and


simulators often shared the same user interface. The commands were also similar, though without the actual constraints of the target hardware, simulators were often more flexible than the monitors. They could use all the resources of the host machine.

The monitor shown here is for the 68K [Clements]; it is from his book on the 68K. Simulators were a far cry from the modern GUI-based windowing simulators of today.

Whilst mentioning the fact that simulators were originally supplied by silicon manufacturers: this is a good test for a new piece of silicon. If the development tools available come only from the silicon manufacturer, beware! This is for several reasons. Firstly, it could be that all the compiler, debugger and ICE vendors see no market for the chip. Thus the MCU may be short-lived, in a niche market, or very different from other chips available. This is of course not a problem if it is your niche market. Secondly, if the tools are available, usually at low cost, from the silicon manufacturer, what incentive is there for 3rd-party tools companies to get involved? This ties the developer to a highly controlled single source for tools, which could disappear as soon as the silicon manufacturer decides to stop production - usually because they want to release the next line in the family, want all their users to "upgrade" to it, and so stop supporting the older development tools. The third test is to look to see if the tools are all software based. Hardware tools still cost more time and money to produce than software tools. Copies of software tools can easily be "run off" and are easy to modify, which is very useful if the silicon is not 100% stable.


Hardware-based tools tend to appear after the new silicon is reasonably stable. If there are no hardware tools available, look at the age of the part and find out which hardware tools companies will be supporting it. If it is only M-Mouse & Co, beware. For chips with a future there should be several mainstream tools vendors with tools. The only exception to this is in special niche markets; in this case the silicon companies tend to restrict the tools to one or two companies for reasons of commercial security. The smart-card and DECT markets fall into this category.

1.4. The First ICE

Even in the early years of embedded micro-controller development, the ICE was the ultimate tool. Intel developed the first real ICE, the MDS-800, for the 8080 in 1975. However, they were very expensive: in 1975 the MDS-800 was $20,000. In those days, as can be seen, they were also bulky. The picture is just the ICE, nothing else! The MDS-800 has a screen, a keyboard and two 360-kilobyte, 8-inch floppy disks (that's 20 cm for the youngsters). It was heavier than the average PC these days, having an all-metal construction. The screen was a monochrome character-based type. Also there is no mouse. Note it was expected in those days that the light pen would rule the world by now - not that the ICE had a light pen either.


In those days ICE in general were not always reliable, and they were intrusive in many situations. So they were "almost real time" (but a lot more real time than anything else). Their buffers were small: 44 cycles (not 44 Kbytes!). This was partly due to the level of ICE technology and to what now seems the ridiculously high cost of RAM.


The reliability, or lack of it, often stemmed from the mechanical adaptors used to attach the ICE to the CPU under test: fragile multi-pin connectors with large cables attached and, in the case shown above, a substantial amount of electronics at the target end. The other problem was (still is, actually) the electrical loading. The ICE itself can affect the electrical characteristics to the extent that it can actually cause the very glitches that are virtually impossible to find. Furthermore, ICE were not always able to emulate the faster micros. In the early days a CPU that could be used for embedded work was the same as the one that would be used in the ICE. These days we have 500 MHz to 1 GHz machines debugging embedded systems that have also increased in speed, from 4 MHz to speeds reaching nearly 50 MHz.

The two major differences between monitors and emulators are these. When an ICE stops at a breakpoint the whole board stops, and one is actually looking inside the MCU with the whole target suspended; with a monitor, the target and the monitor program are still running, and the monitor does not actually look inside the MCU in the same way. To see the registers on an ICE you just see them as the ICE holds them; on a monitor, the monitor has to run code on the target to retrieve the values. Breakpoints on an ICE can usually be placed anywhere, whereas on many monitors and simulators there are often restrictions. For example, with the 8051 family software breakpoints can usually be placed only on a 3-byte opcode.

However, most decent engineers could write their own monitors, and many did, which meant that not only was a monitor available, the engineer understood it and could modify it if required. In contrast, the early ICE was expensive black magic that was not always reliable. The one thing that emulators had in common with monitors was their strong assembler orientation. There was, in the 1970s, no HLL that approached the current widespread use of C in the industry today. Thus the number of people using any one language (or a version of it) for embedded work was small. In any event, early HLL compilers could not produce code that was compact enough to fit in many embedded designs. Due to the initial cost and subsequent unreliability of some of these early ICE, they were often dropped in favour of ROM-monitors. This may seem like a backward step now, but engineers like to get on with the task and not have to spend time wrestling with unreliable equipment. Simulators were not available and the monitor was the only other on-target option.


In the beginning, logic analysers were also used, as they had far better triggering facilities than the ICE. They did at least permit a non-intrusive view of the busses. One drawback of the old ICE was the slow serial link. When 9600 baud was the fastest rate the serial link could manage, it could take a while to download the program code; added to which, it took some time to generate the symbol table. Any change in the code, or a crash on the ICE or the target, often required a long "reload" cycle. Some manufacturers got round this by having high-speed parallel links between the ICE and the host. These were usually proprietary and required special (expensive) parallel cards in the host. The spectre of the slow serial link and the high cost of the ICE still haunt the modern ICE. However, most ICE now use high-speed serial, USB and Ethernet, so the parallel connection has all but disappeared.

1.5. How does an ICE Work?

This section has been added after an exchange of emails I had with an experienced test engineer who last used an ICE when the Intel MDS, shown in the previous section, was new! It appears that not everyone understands how an ICE works or why they were comparatively expensive. This section will, I hope, explain some of the limitations of an ICE and why, as with all tools, understanding the philosophy behind them will assist in your use of them.

In most instances the ICE replaces the target MCU on the board. With some processors, e.g. the 8051, where the address and data buses are visible, it is possible to leave the part on the board. However, this is not the same as BDM or JTAG debuggers, as will be seen later. In the good old days the target was usually a 40-pin DIL package. However, as MCUs got larger and more complex, so did the connections to the target board.

Given that the ICE replaces the MCU, it has to emulate the part and run programs. This resulted in some "interesting" electronics replacing the MCU part on the board. The functional replacement was done in several ways. To start with, the ICE must contain something that replaces the target MCU. This has traditionally been a "bond-out" part.

Emulation memory resides in the ICE. In Hitex ICE it is high-speed dual-ported memory fitted as standard; other ICE offered it as an optional extra. Dual-ported means the ICE can access it at the same time as the target program is executing in the same locations. This RAM used to be very expensive. The reason for dual porting is so that the ICE can work on the memory whilst the program is running. If the memory is not dual-ported, it can only be accessed when program execution is stopped, i.e. at a breakpoint.


Normally the emulation memory can be code or data memory, to mimic the internal memory of the target MCU. However, any memory external to the part that is used by the program is also mapped to emulation memory. Mapping is the term used for setting up memory spaces in the emulation memory to replace the actual target memory. In many ICE the emulation memory also replaced memory-mapped IO and peripherals. This is usually done using a bond-out part that actually contains the peripherals. Bond-out parts are only made in small numbers, are very expensive and are not usually released onto the general market. In some ICE, the 8051 for example, there are many memory configurations, so the user has to set them up. In other families the memory map is fixed and the emulation memory is hard-wired.


The code is run from the emulation memory. It is loaded to the ICE, not blown into the EPROMs/Flash etc. Most ICE can load FROM the on-target EPROMs to the emulation memory. This is usually done in the final stages of test to confirm that the binary being tested is the one on the target. Normally the code is compiled and downloaded to the ICE for execution. Data memory can also be mapped. This usually turns off the rd/wr signals. Where this is a problem the memory is "shadowed" so that the rd/wr lines work and the memory on the target is checked.


1.6. The Logic Analyser

The logic analyser is not really a software-debugging tool, but in the early days the absence of good ICE pushed it to the fore. It is more of a multi-channel digital scope. The logic analyser permitted a view of the data and address bus, and other lines, in real time, and was not intrusive. Well, it did have an effect on the lines, but this was minimal. It does not in itself change the timing of anything, neither does it change the memory map of the source code. Like the ICE, logic analysers were large, heavy and expensive (but not as expensive as an ICE). They also took a lot of setting up.

One of the problems with the logic analyser was the way it connected to the target: usually something like 30-50 flying leads. You only had to miscount and be out by one to cause all sorts of "interesting" results. Leads falling off the target were a common problem (also with some ICE). However, the logic analyser connections always seemed to be more reliable than the ICE connections. Maybe it was because the logic analyser tended to be used by hardware people and the ICE by software people….

However, unlike the ICE, which tended to be for a single processor or family, the logic analyser could be used on any bus system. Thus one logic analyser would be used on several projects with different CPU types, making the logic analyser more appealing to the managers and accountants as well as the engineer. This was also its drawback: the logic analyser could not be used in the same way, or as a substitute ICE, when the target was a single-chip MCU, i.e. all the memory was internal and there was no bus visible outside the chip.


The logic analyser could put probes wherever the user wanted, whereas the ICE connections were specific to a particular pin on a specific MCU. Another advantage was that the analyser could run faster than most ICE, which, in the early days, made it a lot more flexible and attractive to management. Its lack of HLL support was not a problem, as most work was done in assembler and most engineers spoke hex anyway! Logic analysers have improved dramatically over the years, to the point where the logic analyser has almost become a reliable, less expensive version of what an ICE used to be. Some of the top-end logic analysers can disassemble the data on the bus and even do high-level language debug. Logic analysers are still used occasionally for hardware debugging. However, logic analysers are still very expensive, to the point where they are in many cases more expensive than an ICE.


Some logic analysers have colour graphical screens, are mouse driven, and have floppy disk drives to store data and set-up information. The one I have will also print screen dumps, trace information etc. to a colour printer. The logic analyser also has competition from many ICE these days. Some ICE can trace not only in C source but also in a mode similar to the logic analyser. This can mean that often all that is needed for debugging is an ICE and a digital scope. The logic analyser tends to be the tool of the hardware teams bringing up a new board, and is no longer in the software engineer's toolbox. In some places the ICE has even been seen in use rather than the logic analyser for bringing up hardware.


2. Brave New World So much for the past, things have moved on and at an alarming rate. Things are now smaller and bigger at the same time. They are faster, cost a lot less (comparatively), have more useful features, better interfaces, more intuitive and are multimedia. Every thing is now Multi-Media. OK…. We have pretty colour screen shots in the interactive hypertext help files, more imaginative icons and the tool will use one of those annoying beeps (in stereo) at the appropriate time… what more do you want? The advances in embedded tools have been mainly due to the semi-conductor industry itself. The changes have been brought about by the vast increase in the number of transistors that can be packed on to a chip and the mass production techniques that have meant prices of chips have plummeted. Many more functions are now packed on to less expensive chips including Debugger Friendly Features. Basically faster better tools can be made for less cost. Also features that were not practically possible ten years ago are now on all but the most basic tools The other more visible effect of this chip revolution is far more powerful and inexpensive host systems. The ubiquitous PC with faster processors, huge hard disks, Giga bytes instead of kilobytes of memory (megabytes came and went last year!), a multiple window GUI rather than a single screen of text and all of it all running several orders of magnitude faster. Thus the tools user interfaces improved dramatically and the host capable to handling far more data from the ICE itself. Another side effect of a powerful host system is that there could be integration between development tools. The practical integrated development environment (IDE) was born. It is now possible to have the compiler system running (controlled by menus and mice) at the same time as the debugger (simulator, ROM-monitor or ICE) and “flick” between them. A cycle of edit, compile, link and debug on an ice is as fast as it takes to tell. 
Taking it to the extreme, in some cases it is possible to "run" the diagrams in CASE tools on the target hardware via an ICE or JTAG debugger.

Having said how wonderful the world is now, there are many developers still using tools via command-line control and make files under DOS. I understand that there is also a CP/M group out there somewhere as well! The Unix developers will get annoyed here, but the same is true for Unix (and Linux): most developers tend to use X and a graphical front end rather than a terminal window for a lot more things, though Unix developers usually drop into a terminal faster than Windows developers. They also tend to be far more comfortable with make.

2.1. Bringing The ROM-Monitor Debugger Up To Date

With the simulator and emulator now able to debug the HLL (or at least C), and a lot more user-friendly, the ROM monitor started to look very old-fashioned in its adherence to assembler support. People were suggesting the death of the monitor. However, as software can be "made" for the cost of duplicating the disk, monitors are now the cornerstone of most 8-bit and 16-bit development kits, where they are included "free". This gives students, hobby users and small companies that are looking at using a micro for the first time some debug facilities.

Adapting the monitor to HLL support has proved remarkably simple for tools makers using the standard debugging and symbol information files. These debug files are commonly used by simulators and ICE. In fact most monitors now work with simulator front ends, and all that was required was some comparatively simple changes to give monitors a longer lease of life. Similarly, the ICE and simulator producers were also able to produce a target monitor, which can fool an emulator's or simulator's HLL debugger into thinking that real emulator hardware is present (with limitations). The flip side is that in some cases the simulator or monitor user interface can be used to drive an ICE!

Therefore, the monitor-bound developer now has access to a subset of the sort of facilities previously only enjoyed by emulator users. This means that programs may be transmitted to the target for debugging and C source lines single-stepped; execution breakpoints may be set on code addresses, using source lines or symbols. The monitor user interface is now a fully windowed affair, with full mouse control and pop-up menus. In some cases it is the same interface as the simulator or ICE; the one shown here is a Keil simulator. For testing purposes, command files may be constructed to allow repeatable test conditions to be applied to user program sections, traditionally the simulator's forte. Often the same tests can be run on both the simulator and the monitor. Some monitor-based HLL debuggers even manage on-the-fly program variable retrieval to the host PC's screen, albeit with some loss of real-time operation.

However, the ROM monitor is still not real time and still changes the memory map (a commercial 8051 monitor is about 4-5 Kbytes). Some monitors now load into high memory (or low memory, depending on architecture), reducing the effect they have on the application under test. The change in location means that the user code is compiled to the same locations it will occupy on the final target. This is an improvement, but it still distorts the memory map in a less obvious way, and in some cases these monitors require some special hardware memory mapping.

The current use of monitors is in two main areas. Firstly, very simple, low-cost projects where it is unlikely there will be hardware or time-critical problems: typically student projects, or initial feasibility studies with a processor that has not been used before and no in-house tools available. This assumes that there is memory available for the monitor and that the hardware has a spare serial port. However, it should be remembered that serial ports cost money, and this is hidden when looking at the "cheap" ROM monitor. The cost of designing in an additional serial port, the parts and production can be considerable. Either that or the development boards will be different to the production boards.


Secondly, the initial stages of large undertakings, where software trials are made on a fully proven board, usually provided by the CPU manufacturer or another third-party supplier. Later in the project, when the real target becomes available, the monitor is discarded in favour of a full in-circuit emulator. Whilst the ICE can do the same job as the monitor in this case, on large teams it is cost-effective to give several team members a target board and a monitor. Also, on a development with a new MCU, the ICE may not be available initially.

The problem still remains that breakpoints may only be set in certain places, e.g. on 3-byte opcodes in the case of the 8051. This restricts the use of the monitor somewhat. Currently almost all monitors have some sort of HLL capability, reflecting the widespread use of C on embedded projects.


Now that C is used in most projects, the traditional assembler-only monitor has reached the end of the road. The increasing use of microcontrollers with on-chip FLASH or OTP rules out the monitor debugger, because the monitor needs the program to run from RAM. The breakpoint (and single stepping) is produced by putting in a jump to a monitor handling routine where the stop is to occur. In OTP, EPROM, ROM and most FLASH this is not possible. There are some single-chip parts that get round this by offering a limited number of breakpoint registers for special "compiled in" monitors, but in single-chip designs the monitor is effectively dead for all but initial development.
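The jump-planting mechanism described above can be sketched in C. This is a hypothetical illustration, not any particular monitor's code: the trap opcode and the RAM array are invented, and a real monitor patches live code memory rather than a buffer.

```c
#include <assert.h>

/* Hypothetical sketch: a monitor "plants" a breakpoint by saving the
 * original opcode and overwriting it with a jump/trap into its own
 * handler.  This only works when code executes from RAM, which is
 * why monitors fail on OTP/ROM/FLASH parts.                         */

#define TRAP_OPCODE 0x12u   /* e.g. LCALL on the 8051 (illustrative) */

typedef struct {
    unsigned addr;          /* where the breakpoint was planted */
    unsigned char saved;    /* original opcode byte             */
} breakpoint_t;

/* Plant a breakpoint: save the byte at addr, replace it with a trap. */
static void set_breakpoint(unsigned char *code, unsigned addr,
                           breakpoint_t *bp)
{
    bp->addr   = addr;
    bp->saved  = code[addr];
    code[addr] = TRAP_OPCODE;
}

/* Remove it again: restore the original opcode. */
static void clear_breakpoint(unsigned char *code, const breakpoint_t *bp)
{
    code[bp->addr] = bp->saved;
}
```

In ROM the write to `code[addr]` simply has no effect, which is exactly the failure described above.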

2.2. User Applications With Integral Debuggers

There are three types of integral debugger. One is a recent (Spring 2002) incarnation of the traditional monitor for 8-bit single-chip designs that can be used where the program space is in FLASH memory. This puts on to the chip a traditional monitor that talks to one of the new breed of graphical monitor/simulator interfaces. It offers a subset of the usual features, but it opens up a new option for designs that would previously only have been able to run on an ICE if target debugging was required.

The second form of monitor is where the monitor's own object files can be linked with the final application code to produce a system with an integral debugger. The third type is specific to an application or family of applications, such as alarm systems or washing machines, where system tests, software upload and system data can be run. Thus, armed only with a portable and a company-standard debugger, a field engineer can service, upgrade and test a system.

The inclusion of a standard commercial monitor gives permanent debug capabilities. It also means that people using ASICs containing MCU cores can test the system and ship the chip containing the code with the monitor. This is often required where the code as tested must be shipped. There must be sufficient ROM and RAM left over to support the monitor and, usually, a spare serial port, of course. Despite other interfaces (USB, CAN, Ethernet) being available, serial remains the most popular for test ports to a system.

Most manufacturers will only make the monitor source available at extra cost, and may even require a royalty for each system shipped which contains the monitor. This has generally ruled out an integral commercial monitor for most applications, and it rarely happens these days, as an ICE would be less expensive, except where it is required to have a monitor in the production version. Also, the increasing use of microcontrollers with on-chip FLASH or OTP often rules out the classic monitor debugger. Therefore the traditional monitor is not often found in shipped products.

The alternative is for the development team to write their own diagnostic program that will take control of the system when it is externally put into a debug or "Engineer" mode. This happens a lot with alarm systems, washing machines, modern cars and the like that need maintenance checks. This is not a true monitor and does not usually offer the same type of general debugging facilities, but it can be tailored specifically to the system.
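A minimal sketch of such an "engineer mode" is a small command dispatcher of the kind a field engineer might drive over the spare serial port. The command names and handlers here are invented for illustration; the paper does not describe a specific protocol.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical "engineer mode" dispatcher: each command a field
 * engineer can type maps to a diagnostic handler.  The flags below
 * stand in for real diagnostic actions.                             */

static int selftest_ran;
static int version_asked;

static void cmd_selftest(void) { selftest_ran = 1; }
static void cmd_version(void)  { version_asked = 1; }

typedef struct {
    const char *name;
    void (*handler)(void);
} command_t;

static const command_t commands[] = {
    { "TEST", cmd_selftest },
    { "VER",  cmd_version  },
};

/* Dispatch one command line; return 0 if recognised, -1 otherwise. */
static int dispatch(const char *line)
{
    size_t i;
    for (i = 0; i < sizeof commands / sizeof commands[0]; i++) {
        if (strcmp(line, commands[i].name) == 0) {
            commands[i].handler();
            return 0;
        }
    }
    return -1;
}
```

Unlike a true monitor, such a dispatcher only offers the tests its authors thought to write, which is exactly the trade-off described above.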

2.3. Modern Simulators

Simulators have come a long way in a few years, largely due to the growth in power of the PC and GUI operating systems. Simulators seek to provide a complete simulation of the target CPU via pure software. The program can be "run" and its behaviour observed, up to a point. Of course, being a software simulation, execution does not proceed in real time, and all IO signals must be generated by special routines designed to mimic peripherals or other extra-CPU devices.

The other problem is that simulators simulate to the specification of the part... which is more than some silicon does. This can lead to differences between the simulated and the actual part, and further to this there can be revisions of parts. The other thing to bear in mind is that a simulator is software... and software is notorious for having "interesting features".

Simulators now offer high-level language debugging because they are normally bundled, or more usually tightly coupled, with HLL compilers. Most commercial simulators are able to simulate a large range of CPUs in a family.


The simulator's role can span an entire project, but its inability to cope with real-time events and signals means that, as the major debugging tool, it is limited to programs with little IO. Also, if the program has to interact with other devices, whether on-chip peripherals or external inputs, the complexities of accurately simulating them can be a problem.

The real strength of the simulator is in being able to exercise the whole, or parts, of a software system repeatedly with predefined signals, usually using scripts. Unit test of a source code module or function can be carried out under simulation; see QuEST 3, Advanced Embedded Software Testing. Thus, before committing a new function to the whole system, a standard test on a simulator should rule out serious crashes.

Due to the low cost of simulators, and the fact that they do not need external hardware, they have become quite popular. Many compiler and ICE vendors now supply them, whilst the traditional suppliers, the silicon manufacturers, have tended to fade from the scene. More often the silicon vendors will work with a 3rd-party tool vendor when simulators, compilers and ICE are needed for new chips. Thus most compiler suites now offer a simulator as an option. Most are now tightly coupled with the compiler IDE.
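The kind of repeatable, scripted test the simulator excels at can be sketched as a table of predefined input "signals" with expected outputs. The debounce() filter below is an invented example function, not from the paper.

```c
#include <assert.h>

/* Toy input filter: the output goes high only after three
 * consecutive high samples (a crude debounce).  This stands in for
 * the unit under test.                                              */
static int debounce(int sample)
{
    static int run = 0;
    run = sample ? run + 1 : 0;
    return run >= 3;
}

/* One row of the "script": a predefined input and what the unit
 * should produce, mimicking a simulator test script.                */
typedef struct { int in; int expected; } vector_t;

/* Run every vector in order; return the number of mismatches. */
static int run_vectors(const vector_t *v, int n)
{
    int i, failures = 0;
    for (i = 0; i < n; i++)
        if (debounce(v[i].in) != v[i].expected)
            failures++;
    return failures;
}
```

Because the input table never changes, the test is repeatable run after run, which is precisely what debugging asynchronous real hardware cannot offer.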


The simulator can often easily be converted to run new versions of a part or to interface to new peripherals. This makes development of software for a new part possible even before the part has been put on silicon, which enables silicon vendors to have software available and create interest in a new part. So if there is a simulator but no ICE available for a part you are thinking of using, take care! Either the part is very new (unstable?) or no one sees a big enough market for it to warrant making tools for it. If it is a niche market, silicon vendors usually team up with a 3rd-party tools vendor to produce the required tools.


2.4. Modern In-Circuit Emulators

The modern emulator, like all electronics, has got smaller and faster, with a lot more features for a lot less money. The ICE lost the built-in screen, keyboard and floppy disks, making them smaller, less expensive to manufacture and more reliable. At the same time emulators are faster in raw processing speed, which means that they can support non-intrusive real-time emulation at speeds only dreamed about a few years ago. This is because embedded targets, generally, have not increased in speed at the same rate as the host CPUs.

We are still using, as a target, the 8 MHz 8051 that was around when the 80286-based PC running at 16 MHz was king, but now we have the 2.5 GHz (it was 1 GHz when I first wrote this 6 months ago!) Pentium 4 used in the host for debugging. The 8051 family, which still makes up a major part of the embedded market, still only tends to run up to 20 MHz, though recently speeds have been increasing up to 30 MHz, and internal clock doubling on some chips requires an ICE to emulate at 40 to 60 MHz. In the 16- and 32-bit markets speed has increased, but again nothing like as fast as the speed increase in hosts and debugging tools.

At last the ICE has broken free of assembler and has HLL support; well, mainly C and C++ anyway! Note that the problems in providing C++ (and OO) support are much greater than providing C support. Also, C is not a subset of C++; that is a myth that has caused much trouble in recent years.

There are a few ICE that do support other languages such as PL/M, Pascal, Ada and, surprisingly, Modula-2! (From the November 93 Embedded Systems Engineering magazine survey.) These non-C languages are becoming less common as time rolls on. PL/M is an old language that is fading3 away, whilst Pascal and Modula-2 never really gained critical mass before C swept all before it. Ada will of course continue whilst the US Government requires it; however, it should have enough users to stand on its own at the moment. This is a purely commercial comment on supply and demand; I am not commenting on the technical suitability of any of the languages.

ICE can now display the HLL with full symbol and type information using the various formats such as ELF, DWARF, OMF, etc., which are common to a particular group. There are a few proprietary systems still in use, but these are fading out. The compiler produces these debugging files; the ICE or simulator (and latterly the monitor) processes them to get the information in the form it needs. In the 8051 field OMF or extended OMF are used.

As can be seen from the screen shot, the modern ICE can give not only the name of a variable but its physical location, size and type information. When thinking of using a compiler suite with a debugger, check for compatibility and upgrades. Some inexpensive 8051 ICE still do not implement the extended OMF. Fortunately it is an extended OMF, and a basic OMF tool should not crash if it gets an extended OMF file; it just does not use the extensions.

On the PowerPC front I have seen a case where a compiler vendor upgraded all the compilers at a customer's site and mentioned in passing that they could now do ELF 2. What did not become apparent until a few days later was that ELF 1 was no longer supported: there was no ELF 1/2 switch, it was ELF 2 or nothing. However, the debuggers were still ELF 1 and could not read the ELF 2 files. The debugger vendor said they would "probably" have the ELF 2 version out in six months...

Whilst mentioning the symbol display and user interface, another improvement in the modern ICE is that dipswitches are no longer required. ICE these days are point and click... Set-up and configuration should all be via software (and FLASH RAM or EEPROM). This includes target clock speeds as well as memory, CPU type (within a family) and peripheral configuration. It is rare to find jumpers on ICE, except occasionally at the personality pod.

Because everything is now set up in software, it is possible to save state and set-up information. This means that debugging can be resumed after a break with certainty and speed. No more sitting around on Monday mornings trying to recall exactly what the settings were when you finally gave up late on Friday! As an aside, most ICE can be driven by scripts (see QuEST 3, Advanced Embedded Software Testing), which means that scripts or macros can be used to speed up common tasks.

This increase in everything, coupled with an even faster drop in the cost of technology, has unfortunately resulted in some manufacturers using brute force, rather than better engineering, to crack some problems. Initially trace was 44 cycles; now it can be 64K lines or more of information. This, as we shall show later, is not always as useful as it seems.

Another example of changing technology (linked indirectly to the trace problem) is breakpoints. In the past the ICE would execute the line where the breakpoint was and then stop. This required the engineer to place the breakpoint on a suitable line before the point of interest, which was not always as easy as it sounds.

3 If anyone is still using PL/M, let me know at [email protected]; I have some PL/M tools going spare.
For example:

    /* max_count is assumed to be defined elsewhere, e.g.: */
    #define max_count 10

    char Func(unsigned char b, unsigned char y)
    {
        unsigned char x[max_count];
        unsigned char count = 0;
        unsigned char a = 1;
        unsigned char c = 2;
        unsigned char z = 0;

        while (max_count > count)
        {
            x[count] = y / ((a * b) / (b + c));
            a++;
            y++;
            c = a + (b / a);
            z = z + x[count];
            count++;
        }
        return (c + z);
    }

To find the return value of z one needs to set a breakpoint on the return but not execute it. Executing the return and then stopping will put one back into the calling function, causing a loss of visibility of the local variables in this function. Stopping on an instruction before the return is going to be inside the loop on the first iteration unless suitable triggers are provided. Neither option is satisfactory, but many ICE still expect you to work this way.

Another common type of line is:

    Var1 = var2 + get_result(var3);

This makes it impossible to check the return value. The only option (other than putting a breakpoint on a return in the function) is to be able to drop into assembler. If this were the way Func() in the last example was called, it would be almost impossible to debug with the older type of ICE.
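Where the source may be touched, one common workaround is to break such a line up with a temporary, so the return value lands in a named object the debugger can display. A sketch (get_result() is invented for illustration):

```c
#include <assert.h>

/* get_result() stands in for the function whose return value we
 * want to inspect; it is an invented example.                       */
static int get_result(int x) { return x * 2; }

/* Hard to debug: the return value of get_result() never lands in a
 * named object, so an older ICE has nothing to display.             */
static int opaque(int var2, int var3)
{
    return var2 + get_result(var3);
}

/* Debug-friendly rewrite: the temporary gives a breakpoint (and the
 * debugger's watch window) something to look at.  An optimising
 * compiler usually folds the temporary away again in release
 * builds, so the generated code need not change.                    */
static int transparent(int var2, int var3)
{
    int tmp  = get_result(var3);  /* breakpoint here shows the value */
    int var1 = var2 + tmp;
    return var1;
}
```

The two versions compute the same result; only the visibility to the debugger differs.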


As can be seen here, the lower window has a breakpoint on the HLL function call that is (automatically) mirrored by a breakpoint in the assembler window above (which also has the C interleaved). This permits the engineer to step through the assembler so that the mechanics (and the values) of the return value can be observed.


With older ICE this just was not possible, because there would either be NOPs, or the breakpoint would have been somewhere in the middle of the calling function. To stop at, but not execute, a line requires a lot of work to provide non-intrusive look-ahead opcode decode logic. The side effect is a very good trigger system for the users, and it also has a beneficial effect on the trace.

NOTE: Some ICE, without the look-ahead opcode decoder, still insert NOPs into the assembler before each C code instruction to "emulate" these features. This instruments the code and therefore it is not a true emulation.

Now memory is comparatively inexpensive. OK, it goes up and down a bit, but compared to the 1970s and early 80s it is unbelievably inexpensive. As mentioned previously, many ICE have large trace buffers and advertise the fact, suggesting that more is better. Well, depending how it is used, it might be.

Some manufacturers took the time to look at what was actually needed in a trace buffer and put the work into making them more effective. It is far better to be able to trace 1 Kbyte at the right time than 8 Kbytes "around the problem" that someone has to wade through. To trace the "right" area requires a good set of triggers and holding the right data in the buffer.

The importance of the triggers and the ability to filter cannot be overstated. Good triggers can ensure that the trace only records the area that is needed and holds the correct data. The right data includes:

Executed addresses (not loaded and discarded ones)
Code labels and variable names
External signals, ports etc
Bus states, read/write/fetch and interrupt ack


Some ICE use a technique of only recording the program branch points in the trace buffer, then dynamically recreating the source code when the trace is examined. This can, when coupled with filtering, triggers for stopping and starting, and things like user-selective ignoring of library calls, make a very powerful trace system without requiring a large trace buffer. In fact this method (by rule of thumb on 8051 ICE) has the effect of making the trace appear three times its physical size, so a 2K trace using the branching technique will probably hold, on average, as much as a 6K trace buffer. It also has the side effect that less data is required to be transmitted from the ICE to the debug software host (usually a PC).

The trace should also be accessible "on the fly" whilst the system is running, and searchable within the ICE (not in a text editor later). Of course most ICE these days can display the trace in a variety of formats, such as raw, assembler, high-level language lines or, as shown here, "signal", mimicking the display of a logic analyser and making debugging with a storage scope much easier. NOTE: this does not replace the digital scope or logic analyser, but it makes it much easier to use the two together without having to do mental mind-flips between the two displays.

Some ICE need the larger trace buffers because they record all the assembly code, not just the C, and cannot be switched to ignore library calls. In addition, they may have a line holding a NOP for every line of C, as mentioned previously.

Some ICE with all the good features mentioned have also been increasing their buffer sizes of late. This is for an entirely different reason to the brute-force one: some ICE vendors are working with high-end code analysis tools that permit the dynamic analysis of code and the animation of things like state-charts (Rhapsody from I-Logix is shown here), whereby the animation is actually run on the target hardware via an ICE. To work with these tools does require a large trace buffer, but it is of little use without all the other features mentioned, such as triggers and break-before-make breakpoints.

In many cases, to produce an ICE one needs a bond-out or hooks chip that is only available under licence from the original silicon manufacturer (hooks are explained later). To get round this, modern technology has produced the FPGA. The FPGA is often used in inexpensive ICE to produce a system that on the surface appears to do as much as an expensive ICE. That is, until you actually try to use it and discover that it does not quite match the real ICE in performance, usually where you need it: on the edges of the hard real-time performance. Having said that, "proper" ICE also use FPGAs and ASICs, but not as a way of avoiding licensing the correct technology.

One "entry level ICE" that did not have a hooks licence was emulating a part using a second processor and a monitor-type system. Users were often not made aware of this drawback. It was not until they started to get stuck into the debugging that they discovered the difference and had to buy a proper ICE... They now have the "cheap" ICE on the shelf gathering dust. Not such a bargain. Remember, you only get what you pay for.

The new inexpensive technology has spawned many so-called "universal" ICE that claim to cover several MCU families. Some claim to cover "all 8-bit" systems. If it were that simple to do properly, the major ICE manufacturers would be rushing to the FPGA and producing multiple-architecture ICE. It would be wonderful for production costs and profit margins, but it hasn't happened with any of the reputable vendors so far. The phrase "Jack of all trades and master of none" comes to mind.
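The branch-point-only trace technique mentioned earlier in this section can be sketched on the host side: record just (branch address, target) pairs and fill in the sequential addresses between them. This is a simplified illustration assuming one address per instruction, not any vendor's actual trace format.

```c
#include <assert.h>

/* Hypothetical sketch of branch-only tracing: the ICE records just
 * branch-from/branch-to pairs; the host recreates the full executed
 * address stream by filling in the sequential run between one
 * branch target and the next branch.                                */

typedef struct { unsigned from; unsigned to; } branch_t;

/* Expand a branch trace into the executed address list.
 * Returns the number of addresses written to out.                   */
static int reconstruct(unsigned start, const branch_t *br, int nbr,
                       unsigned *out, int max)
{
    int n = 0, i;
    unsigned pc = start;
    for (i = 0; i < nbr; i++) {
        while (pc <= br[i].from && n < max)  /* sequential run      */
            out[n++] = pc++;
        pc = br[i].to;                       /* follow the branch   */
    }
    if (n < max)
        out[n++] = pc;                       /* final branch target */
    return n;
}
```

Only two words per taken branch are stored, yet the whole linear execution path is recoverable, which is why such a buffer "appears" several times its physical size.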
One only has to compare the von Neumann architecture of the 68 series with the idiosyncrasies and Harvard architecture of the 8051 to see that there will have to be some compromises, or some dual systems under the lid. If it were a dual system, these ICE would cost more than a single-family ICE; however, "universal" ICE always seem to cost so much less than single-family systems. These inexpensive "universal" ICE usually require a lot of add-ons, add-ins and upgrades to come close to matching the single-family ICE. There is no such thing as a free lunch.


What has happened is that the cost of the high-end universal ICE has come down and, due to modern technology, they have got a lot smaller. These ICE are modular in both hardware and firmware. So whilst no system is "universal", they have a range of generic and architecture-specific modules that can be used in many combinations. At one time these ICE were housed in 19-inch racks and the modules were large PCBs; now they are considerably smaller, often a single ASIC chip. What is coming to the fore is the professional modular ICE: a professional-standard universal ICE.

The inexpensive "universal" ICE are what they always have been: minimal hardware and as much as possible in software. The problem here is that they do "most" of what you need reasonably well. The trouble is that when you get to the point where you really need a proper ICE, they tend to let you down. I recently (late 2001) had to demonstrate a good ICE to a customer on their equipment (it was a Hitex MX51 for 8051). The ICE refused to correctly boot the system. The customer pointed out that the Hitex ICE was "no good" because they had a cheap ICE that worked perfectly... A couple of hours later the answer became clear: the customer's system had a deep-seated problem in start-up that their cheap ICE totally ignored. A couple of days later the problem was solved. It also had a knock-on effect on the problem I was called in to see in the first place; the cheap ICE would not have found that problem either.

One of the "it came and went" parts of ICE history is the serial link problem. At one time a serial link ran at 1200, 2400, 4800 and (if pushed) 9600, so people looked at parallel links from host to ICE. Some vendors even went as far as putting the ICE into the PC! Others insisted on putting parallel interfaces, most of them proprietary, on to the systems. Now serial links run well at 115200. The parallel link has gone, replaced in high-end systems with USB or an Ethernet link.

The other interesting problem we have found is that many (most?) people debug on MS Windows platforms. From Win3* onwards there has been less user control over the hardware.
In NT it was positively discouraged, giving rise to many problems, as those with parallel-port dongles discovered. Despite the fact that the parallel port grew up to become bi-directional, it is still seen by Windows as essentially a printer (output) port. This has meant that it has become a lot less suitable for controlling equipment. A pity.


The bottleneck was rarely the serial link anyway; it was the generation of the symbols. By generating the symbols at compile time, before the program is loaded into the ICE, a lot less data has to be passed up the serial link. The other method is rather like interpreted BASIC: every time there is a breakpoint, the symbol information required has to be processed and sent up from the ICE to the host. This slows down the screen update.


As mentioned, USB will take (or has taken?) over from serial for the lower end (8- and 16-bit) and Ethernet for the higher end (16- to 128-bit).

2.5. New ICE Features

Emulators are now able to offer many new features, apart from HLL support and vastly improved triggers and trace, that were not dreamed of in the past. Two of the most important advances are code coverage and performance analysis. Some simulators also offer these, but not on the real hardware or in real time. In the case of code coverage this is not too bad, though it obviously puts a query over interrupt sequences from external sources; hopefully the internal sources will be synchronised. However, you can never be certain of the interrupts in anything other than real time. For performance analysis, a simulator can only give a theoretical result. There is a separate paper, "Advanced Embedded Software Testing" (QuEST 3), that looks at the implementation and uses of code coverage, performance and regression testing with an ICE.

2.5.1. Code Coverage

Code coverage is now an important part of testing and validation, particularly in safety-critical environments. Actually, it always was important; now it is recognised as such. In simple terms this test should result in documentary evidence that, during a certain test, all instructions were executed and no malfunctions were observed. This gives a high level of confidence that there are no hidden bugs in unexecuted code waiting for the end user! There are many software tools that now do code coverage, but the problem for embedded systems is that many will instrument the code or not actually run on the target. Using an ICE you know the code has been run on the hardware in real time. However, it is not that simple... see [Maric].

Code coverage comes in several versions: statement coverage, branch coverage and modified decision coverage.

The basic version is statement coverage. This simply says that a line has or has not been run. In some cases it will also say if a line has been partially covered, which handles lines that contain more than one clause. This should be possible on any reasonable ICE, as shown.

Branch coverage is a little deeper. This is not usually available as a "tick box" option on an ICE; it is the prerogative of specialised tools. It measures coverage based on the number of paths through the code. In the code shown below, statement coverage would show 100% coverage if each line had been covered.

    int a, b, c, d;

    void f(void)
    {
        if (a)
            b = 0;
        else
            b = 1;
        if (c)
            d = 0;
        else
            d = 1;
        return;
    }

However, looking at the diagram below, it can be seen that there are in fact four paths through the code. These would be:

A = 0 and C = 0, thus B = 1 and D = 1
A = 0 and C = 1, thus B = 1 and D = 0
A = 1 and C = 0, thus B = 0 and D = 1
A = 1 and C = 1, thus B = 0 and D = 0

Statement coverage would be happy with the two runs below:

A = 0 and C = 1, thus B = 1 and D = 0
A = 1 and C = 0, thus B = 0 and D = 1

Therefore all the lines of code would be covered, but not all the paths.
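This can be made concrete with a small instrumented harness for the f() example: two well-chosen runs already touch all four assignments (full statement coverage) even though only two of the four paths have executed. The hit[] array and paths bitmask are bookkeeping added here for illustration.

```c
#include <assert.h>

/* Instrumented harness for the f() example above.  hit[] records
 * statement coverage of the four assignments; paths records which
 * of the four (a, c) combinations have been exercised.              */

static int a, b, c, d;
static int hit[4];   /* b=0, b=1, d=0, d=1                          */
static int paths;    /* bitmask of the four (a, c) paths taken      */

static void f(void)
{
    paths |= 1 << (((a != 0) << 1) | (c != 0));
    if (a) { b = 0; hit[0] = 1; } else { b = 1; hit[1] = 1; }
    if (c) { d = 0; hit[2] = 1; } else { d = 1; hit[3] = 1; }
}

/* How many of the four assignments have executed at least once.    */
static int statements_hit(void)
{
    return hit[0] + hit[1] + hit[2] + hit[3];
}

/* How many of the four possible paths have executed.               */
static int paths_hit(void)
{
    int n = 0, i;
    for (i = 0; i < 4; i++)
        n += (paths >> i) & 1;
    return n;
}
```

Two runs give 100% statement coverage while half the paths remain untested, which is exactly the gap between the two metrics described above.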


So whilst an ICE doing statement coverage will give 100%, it will not test branch coverage; at least not in its standard "off the shelf" form. It is reasonably simple to make an ICE that has code coverage do branch coverage, but first we should look at the final level of coverage: modified decision coverage. This takes branch coverage to the next level, looking not only at the paths but at the values of the decisions. In the previous example we had a simple true or false.

    int a, b, c, d;

    void f(void)
    {
        if (a < 0)
            b = 0;
        else
            b = 1;
        if (b < 3 && c <= a)
            d = 0;
        else
            d = 1;
        return;
    }

In this case both statement and branch coverage would give 100% without ever testing "c <= a" when b is not less than 3. This is because C stops evaluating as soon as it gets a false. In this case it appears immaterial, but it does highlight the point that 100% code coverage can mean different things to different people.

As mentioned, most ICE only do statement coverage as standard, but it is possible to do both other versions of coverage with an ICE [QuEST 3] [Buchner/Tessy]. Most ICE have a script language that can be used to program the tool, and usually an ICE can also input variables into the program under test. Thus, by careful analysis, one can exercise the code to cover all branches, and it only takes a little more work to do decision coverage as well. This falls under the remit of the Tessy article for the ESC II brochure by F. Buchner at Hitex DE, and of QuEST 3 on advanced testing. This is a topic unto itself that merits close inspection and study if you do any embedded testing.
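The short-circuit point can be demonstrated directly: wrapping the right-hand operand in a counting function (an instrumentation trick added here for illustration) shows that "c <= a" is never evaluated when b is not less than 3.

```c
#include <assert.h>

/* Counter made visible so a test can see whether the right-hand
 * operand of && actually ran.                                       */
static int evaluations;

/* Instrumented version of the "c <= a" condition.                   */
static int c_le_a(int c, int a)
{
    evaluations++;
    return c <= a;
}

/* Mirrors the second decision in the example above:
 * returns 0 (d = 0) or 1 (d = 1).                                   */
static int decide(int b, int c, int a)
{
    return (b < 3 && c_le_a(c, a)) ? 0 : 1;
}
```

C guarantees left-to-right, short-circuit evaluation of &&, so when b >= 3 the counter never moves; the condition passed "coverage" without ever being exercised.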


So far we have only looked at code coverage; there is also data coverage. It can be most illuminating to find that some data has, or has not, been read or written. Data coverage monitors which data areas were accessed during a test and, crucially, allows the potentially dangerous READ-before-WRITE un-initialised data bug to be identified.

To really validate embedded software it must be running in real time on the target hardware. This is only possible with an ICE. Code coverage can be done on a simulator, but that is not the real target environment, and a ROM monitor, unless shipped with the product, changes the memory map (and is not real time anyway). The DTI's TickIT guidelines, the MOD's DefStan 00-55 and the motor industry's MISRA guidelines advise that coverage tests be performed prior to release. On a strictly commercial note, an ICE (which has many other uses) is usually less expensive than the better code coverage and analysis tools.

2.5.2. Performance Analysis

Performance analysis for many applications requires hard real time, not the pseudo real time of simulators. Simulators can show performance in cycles, percentages and even milliseconds, but this presupposes that everything is synchronised.

I once completely tested and debugged a smart card application in a simulator. A smart card has five pins: power, ground, clock, I/O and control. Everything is synchronised to the clock pulse, so in a simulation everything can happen on cycles in a known way.

Most embedded systems have external asynchronous I/O and internal interrupts. This lack of synchronisation makes conclusive testing (and timing) in a simulator less reliable.


Modern ICE usually all have profilers. This gives a reasonable level of performance analysis; whilst better than a simulator, these usually do not have the high level of accuracy or flexibility needed for precise timing. The analyser is usually based around the trace, triggers and conditional filtering, to permit timing of precise areas of code either inclusive or exclusive of other calls and interrupts.

A modern ICE can give, switchable by the user, net function times: the time taken for the actual function to execute with or without the subroutines and library calls it makes. For example, you may wish to analyse a major function that contains a complex set of if statements. If called functions with varying execution times were included, the measurement would be pointless — unless, of course, you wish to become proficient with the pocket calculator. By removing the called functions, the function under test can be accurately viewed.

The ability to time event pairs, coupled with conditional guards and parameters, permits extremely complex analysis with precise timing. It also means that long-term tests can be run looking for those intermittent glitches that traditionally take teams weeks to find. As mentioned, this is non-intrusive timing, so this is reality, not simulation, and exactly what can be expected in the field. This means that the ICE is now in a position to do full system tests rather than simply be used for debugging, especially if scripts and macros (tool control language) are used to set up and run the tests and measurements.


3. New methods & N-wire debuggers

One of the problems for the new ICE is the target packages. The 40-pin DIL has all but gone, now only used on low-budget projects. Most targets now use surface-mount chips with many legs (often over 200) on a much smaller pitch. This has resulted in a variety of complex, expensive and delicate adapters. They are not only mechanically delicate but need a great deal of care taken over their electrical characteristics.

In an effort to standardise debugging techniques and get round the need for multiple complex cables and pods, manufacturers devised various schemes. Generally, these systems use a few pins on the processor taken to a standard socket. Whilst, in most cases, they provide information from inside the CPU, they do not all quite provide all the real-time information a full ICE pod and bond-out chip would. In some cases the services offered are quite restricted; however, other methods do give full ICE capability. A range of methods is described in the following pages. Many of these are specific to a particular family, vendor or group of parts.

3.1. ICE-Connect (Hitex)

On systems like the 8031 (and C166), where the address and data bus are available through ports in some configurations, Hitex has a system called "ICE-Connect". The ICE-Connect system is particularly useful in high-integrity systems where code coverage and testing must be shown with the final code and hardware as it will be used.


The system uses a simple 30- or 56-way header for 8- or 16-bit systems that permits a full ICE to take control of the target in the usual way, but instead of costly bond-out chips the actual target processor is used. All that is required is for the target board to be tracked for the ICE-Connect header, which makes it cost effective even on production boards. Any board may then have the inexpensive ICE-Connect header fitted and the system tested. This means that new software may be tested on real systems using code coverage and performance analysis, which makes this system almost obligatory for some high-integrity systems.

On the down side, the connector has to be designed into the PCB from scratch. Also, several lines (4 on the 8051) must be intercepted rather than monitored. This usually means that the lines are linked on the PCB and have to be cut if the ICE-Connect socket is fitted. When debugging and testing are finished, the links can be put into the socket to make the board function stand-alone as it had before the links were cut.

This method is very cost effective for low-volume production, which is what high-integrity systems tend to be. In high-volume production the cost of a few pennies for the tracking and holes would mount up. The advantage of this method is that you get full ICE control, with all the code coverage and timing analysis, on the actual production board using the processor and components (and code) that will be shipped to the customer. This method is open to these two families because they are, for all practical purposes, not single-chip designs.

3.2. Hooks

For some families, emulation of on-chip ROM-based programs is possible. Traditionally this has been done using "bond-out chips": parts with the additional internal busses brought out to the edge of the part. These are expensive to produce.


In the 8051 world several chip manufacturers use the Hooks Emulation Concept developed by Metalink Corporation. For example, Atmel WM and Philips use it in their RX2 parts, and Siemens use it in their C500 and C166 microcontroller families, to control the execution of MCUs and to gain information on the internal operation of the controllers. Each production chip has built-in logic for the support of the Enhanced Hooks Emulation Concept. Therefore, no expensive bond-out chips are necessary for emulation. This also ensures that emulation and production chips are identical. Remember, when a part is changed a new bond-out is also required.

The Hooks technology is a port replacement method. Using the Infineon C500 (8051 range) as an example: it requires embedded logic in the C500 together with the Hooks interface unit (EH-IC) to function similarly to a bond-out chip. This simplifies the design and reduces the cost of an ICE system. ICE systems using an EH-IC and a compatible C500 are able to emulate all operating modes of the different versions of the C500 microcontrollers. This includes emulation of ROM, ROM with code rollover, and ROM-less modes of operation. It is also able to operate in single-step mode and to read the SFRs after a break.

On this 8051 example, Port 0, Port 2 and some of the control lines of the C500-based MCU are used by the Enhanced Hooks Emulation Concept to control the operation of the device during emulation and to transfer information about the program execution and data transfer between the external emulation hardware and the MCU. The other advantage is that the target MCU is exactly the same as the MCU in the ICE.

There are two types of Hooks emulation, Standard (Basic) and Enhanced:

Standard Hooks
Additional time multiplex on ports P0 and P2
Modified ALE and PSEN#

Enhanced Hooks
No additional time slots but multiplexed port P2
Multiplexed and bidirectional EA#
Modified ALE and PSEN#

Philips use Basic Hooks; this has a maximum upper limit of about 25MHz. However, it has been found that most ICE will only work reliably to 20-22MHz, despite the fact that the parts are designed to run at 33MHz. Atmel and Infineon use Enhanced Hooks, which can be ICEed to over 66MHz.

3.3. ONCE Mode

Intel developed their own system, which is not so commonly used; industry prefers "open" systems these days. The ONCE ("on-circuit emulation") mode facilitates testing and debugging of systems using the device without the device having to be removed from the circuit. The ONCE mode is invoked by:

1. Pulling ALE low while the device is in reset and PSEN is high;
2. Holding ALE low as RST is deactivated.

While the device is in ONCE mode, the Port 0 pins go into a float state, and the other port pins and ALE and PSEN are weakly pulled high. The oscillator circuit remains active. While the device is in this mode, an emulator or test CPU can be used to drive the circuit. Normal operation is restored after a normal reset is applied.


3.4. BDM (Motorola)

BDM, or Background Debug Mode, is Motorola's generic on-chip debug system. It can also be used for programming flash memory. There are several versions of BDM:

A 6-pin connector for HC12, Star8, Star12

A 10-pin connector for HC16, CPU32, PowerPC

Additional output signals are available on different devices or architectures (ECLK, mode pins, trace port). The transportation layers and the protocols used are NOT identical on different architectures. The following basic debug features are common:

Level 1:
Run control
Register and memory access during halted or running emulation
Breakpoints

An additional trace port may be available, e.g. on the PowerPC (Ready Port):

Level 2:
Branch trace
Data trace
Ownership trace

BDM permits reading and writing of memory and registers, and reading and writing blocks of memory, plus stopping and restarting from the PC (which may hold the original or a modified value). BDM takes no resources from the target; in normal running the target is in real time. BDM only comes into play when activated, at which point it freezes the whole MCU. BDM cannot replace the ICE, but it does give the developer a dynamically loaded, though basic, ROM monitor that is always present. The power of BDM, such as it is, comes from the software running on the PC host. Whilst BDM is not all-powerful, it is free in the sense that it takes no resources at all and is on the chip anyway. BDM adapters and host software are comparatively inexpensive. As BDM cycle-steals, in some cases it can be intrusive, but this is not usually a problem.

With BDM, you can run the application in real time and set breakpoints if you run it from RAM, or if you have hardware breakpoint registers in the chip (which not all have) that allow you to stop even when you execute from ROM. An in-circuit debugger is a little bit more hardware that, in this case, gives you run-control (start/stop) and memory access via BDM, but in addition connects to the other external signals of the MCU to allow for functionality like code breakpoints in (external) ROM, bus-cycle trace and data breakpoints.

There is more to this; some vendors provide mappable memory at this stage as well. With a real in-circuit emulator you have all of the above functions, but in addition you have emulation memory that can overlay or replace physical memory in your target hardware. Also, the emulator should always run even if there are problems in your target hardware, allowing you to track down problems.


3.5. JTAG

Several North American and European electronics engineering companies formed the Joint Test Action Group, from which JTAG takes its name. JTAG and IEEE/ANSI standard 1149.1 (an industry standard since 1990) are synonymous. JTAG was originally designed as a Boundary Scan test system for CPU, MCU and other parts.

Boundary Scan was developed in the mid-1980s to solve physical access problems on PCBs caused by increasingly crowded assemblies due to novel packaging technologies. This technique embeds test circuitry at chip level to form a complete board-level test protocol. With boundary scan you can access even the most complex assemblies for testing, debugging, on-board device programming and diagnosing hardware problems.

Basic JTAG is a 4-wire (optionally 5-wire) system (TDI, TDO, TCK, TMS and optional TRST), principally used on CPUs by Intel for boundary scan in testing. That is, it will provide repeated snapshots of the pins on the edge of the CPU. There is also a Test Access Port (TAP) that permits a limited number of instructions. As information is clocked in and out serially, JTAG is not real time, nor a replacement for a full ICE connection, as it will not show activity on the internal busses.

A sixteen-state FSM called the TAP Controller has to be implemented in the target silicon. The TAP Controller understands the basic Boundary Scan instructions and generates the internal control signals used by the test circuitry. A Boundary Scan Instruction Register and decode logic, in addition to two mandatory Boundary Scan Data Registers (the Bypass Register and the Boundary Register), must be present on the IC. The Instruction Register is controlled by the TAP and can act as a serial shift register between TDI and TDO; it selects the appropriate Data Register to be used, as per the current instruction. The standard states that the following three instructions must be supported:

BYPASS: puts the one-bit Bypass Register between TDI and TDO.
SAMPLE/PRELOAD: connects the Boundary Register between the TDI and TDO pins without disconnecting the system logic from the IC's pins.
EXTEST: connects the Boundary Register between the TDI and TDO pins, but disconnects the system logic from the IC's pins.

By expanding registers and instructions, it is possible to get access to device-specific on-chip debug logic. The functionality of the on-chip debug logic is not covered by this standard, only the 'transportation layer'.

The following debug features (or more) may be implemented:

Level 1:
Run control
Register and memory access
Breakpoints
Triggers

Level 2 (with an additional non-JTAG port or additional on-chip trace memory):
Trace information (branch, program, data, ownership trace)

However, JTAG is used on the actual target and does not take up any space in the memory map or any ports. The code under test can be the final code, as it will be shipped. Another advantage is that JTAG can be used with several devices on a board, thus permitting the developer a pseudo-emulator in several chips from one connection.

JTAG debuggers use comparatively simple hardware; it is the software that makes the difference. Most of the compiler suites for processors that have JTAG tend to have debugger software with hooks for JTAG hardware. Therefore, unlike an ICE, you would buy the JTAG debugger in two parts: the hardware from one vendor and the debugger software from another, usually the compiler vendor. This makes the JTAG debugger relatively inexpensive and very useful, as 95% of JTAG tools will also download and program memory. At the time of writing, a full PPC ICE was 8 times more expensive than a JTAG debugger.

JTAG/BDM is now finding its way from its traditional 32-bit market into the 16- and even the 8-bit market. For the 8051 family it is quite common in the area of soft cores: "soft" 8051 derivatives that are used in ASICs. In this case there is usually no room to bring out lines for a traditional ICE.

3.6. JTAG Pre-history


The following was sent to me in early 2003. As far as I can tell it is genuine. The references all check out, and the link to the US patent office is http://patft.uspto.gov/netahtml/srchnum.htm — search on 4,030,072, where the patent can be examined.

Dear Mr. Hills,

I just encountered your article, 'Microcontroller Debuggers - Their place in the microcontroller application development process', and read it with interest. It is well written and quite informative. As one who goes back to the early days of microprocessors, you might be interested to note that, around 1973, I invented and implemented a very robust 'embedded' boundary-scan technology for use in commercial multiprocessor mainframes that were used for timesharing and transaction processing with hundreds or thousands of concurrent remote users. That design, I am told, subsequently provided the conceptual foundation for JTAG and 1149.

Although the component technology in '72 was mostly pre-LSI, the partitioning and signal-access issues were timing- and pin-related just as now, so the problems we solved then were highly similar to those of present-day LSI. The original development work, which ran from 1972 through 1974, resulted in a family of production mainframes that were intended to be nearly unstoppable, because each of up to 36 processors (of several different types) could be partitioned off while the rest of the system continued to operate, and then run through remotely controlled automatic or single-clock diagnostics that would insert states at key points, propagate them by discrete clocking, and then retrieve the resulting state data for outside analysis. Control was managed by a very custom and very real-time 32-bit microcomputer that I also designed, which had a small o/s permitting all single-clocking and diagnostic operations to be accomplished over any of 3 RS-232 links to the world, or through a switch panel mounted locally.

Blinking lights were added because they were considered a necessity for marketing reasons, although they had little functional purpose on such "fast" computing systems. Since this was in the early part of the 74-series TTL era, the 6-processor "chips" each consumed a couple of hundred square feet of 6-layer PCBs in a fridge-size rack, and the full system was 6 racks, not including peripherals.


My 'diagnostic controller' could bring the system to its knees in a matter of microseconds, since it had on-off control of system clocks to every physical module. A result of this was that it had to be ultra-reliable and 'unstoppable' by any and all external means. This meant all program memory was bipolar PROM, exotic items in those days... and very hot running. The only control of it was via the comms, which were accomplished the old-fashioned way by bit-banging in the main control loop.

So the ultimate irony of it was that this super in-circuit diagnostic controller was an absolute nightmare to debug - mostly with a scope and bit-diddling of the microcode bits on a bipolar PROM simulator to make sync patterns for the very inferential debugging process, all the way through a couple K of fairly intense microcode that ran the things described above plus a variety of housekeeping tasks like getting the larger system lit, shutting it down, and making entertaining patterns on the console.

This work was done at Xerox Corp's Advanced Development Group in El Segundo, Calif. Further description can be found in U.S. patent #4,030,072, which was finally issued about 1977. About 1975 I resigned from XRX and went off to start a microcontroller company, but that's another story.

Regards,
Steuart Bjornsson

So there is a piece of history from the original author of the patent.


3.7. AMDebug

AMD have announced a custom on-chip system for their E86 family that includes an on-chip trace buffer and is able to work with a serial (JTAG connector) or parallel port with trace buffer. The parallel version does require a bond-out version of the MCU.

An on-chip Software Development Port supports:
Processor bring-up from power reset (kernel-mode debugging).
Application software debugging and OS communication.
Performance profiling for application software.
Reduced hardware connection costs via a standard JTAG pin-out used on all processors.
An on-chip trace buffer (or trace cache), which records the address of the most recently executed instructions and eliminates the need for complex high-frequency bond-out versions.

[Diagram: the AmX86 core's Software Debug Interface, with trace control and trace records feeding a serial software debug (JTAG) connection and a parallel software debug connection.]

The main reasons AMD see for doing this system are:

There is no need to have a software monitor installed in the target system. This reduces the cost and simplifies initial system bring-up.
No need to reserve ROM/RAM space for monitor operation.
The simple connection makes x86/AMDebug an easy-to-use design solution.
No target resources are required to support host-target communication.


There is no intrusion into normal processor performance, yet complete control and visibility of processor operation. The system supports kernel-mode and application-mode debugging, and AMDebug-based debuggers can be operating-system knowledgeable.

The on-chip program Trace Cache is provided at no extra tool cost. It is this that gives the system a lift because, as previously mentioned, ICE development generally lags processor introduction and an ICE is not available for all package formats. The Trace Cache can also perform performance profiling of operating system or other system information. The Trace Cache information can be examined by the target processor itself or by the host platform via the JTAG port, through which it can be saved or sent to other tools or developers.

Much of the information here was stolen from an AMD presentation. However, it shows that, limited as JTAG is, it is often available long before a full ICE and is now starting to provide most of the essential features at a much lower cost.


3.8. On-Chip Debug (OCDS) - Infineon

The Siemens TriCore is equipped with additional debug support, which can be accessed via the JTAG interface using a simple connector cable. The TriCore architecture supplies two levels of On-Chip Debug Support (OCDS) to permit more powerful tools to be implemented. Both OCDS levels go far beyond the usual facilities offered via JTAG connectors. The integrated debug support does not require any target resources (i.e. communication interface and memory), and the monitor control software is not disturbed by errors within the application software. The standard specifies a 16-pin connector for Level 1; an additional 40-pin connector is specified for Level 2 functionality. OCDS can be used for on-board device programming, too.

3.8.1. OCDS Level-1

Level 1:
run control
device identification
register access during halted emulation
memory access during halted or running emulation
breakpoints
triggers

A detected break event may still stop program execution, but it is optionally possible to produce just a trigger signal for external test hardware, with software execution uninterrupted. In addition, programming the priority level for the debug interrupt allows the continued servicing of higher-priority interrupts in time-critical program sections whilst lower-priority code is halted for debugging. Another OCDS benefit is that read/write accesses are possible on the TriCore internal busses across the entire address space (including internal registers) during program execution, with only a very slight real-time violation. Thus, any program variable can be accessed "on-the-fly".

3.8.2. OCDS Level-2

Level 2:
compressed instruction trace
PCP trace (TriCore)

In order to find the hard-to-detect errors, this level provides additional support via a dedicated emulation chip that has additional signals bonded out to enable the tracing of program execution. This debug support level requires a direct connection to the target controller and needs additional hardware to record the program-flow information.

3.9. Nexus

IEEE-ISTO 5001™-1999, The Nexus 5001™ Forum Standard for a Global Embedded Processor Debug Interface, is an open industry standard that provides a general-purpose interface for the software development and debug of embedded processors. Nexus can be used for on-board device programming, too. It is intended to be the next level up from JTAG (IEEE 1149.1).

The group was formed in April 1998 and, in September 1999, chose the IEEE Industry Standards and Technology Organization (IEEE-ISTO) for its forum. Its aim is to advance the development, marketing, validation and implementation of IEEE-ISTO 5001™-1999. Members of the Nexus 5001™ Forum come from the following operational areas:

Embedded processor suppliers
Independent tools providers
Semiconductor and hardware development tool providers
Software tool providers
Companies doing designs on embedded processors

Hitex is a member of the Nexus 5001™ Forum. For more information on the Nexus 5001™ Forum please refer to http://www.ieee-isto.org/Nexus5001/


The standard requires a minimum of 5 dedicated pins; additional standardized or user-defined pins may be added. The first three connectors are specified (see below). Four compliance classes are defined; the following basic debug features are available:

Class 1:
Run control
Device identification
Register and memory access during halted emulation
Breakpoints
Watchpoints
Watchpoint message

Class 2:
Program trace
Ownership trace
Optional: port replacement

Class 3:
Data trace (write only)
Read/write access to memory in real time
Optional: data trace (read and write)
Optional: data acquisition

Class 4:
Memory substitution via the Nexus port (for reset or exceptions)


Class 4 (continued):
Start ownership, program or data trace upon watchpoint occurrence
Optional: start memory substitution upon watchpoint occurrence or upon program access of a specific address


3.10. Summary of N-wire systems

The problem with these N-wire systems is that they tend to be chip-family or vendor dependent. However, as there is usually little similarity inside an ICE, simulator or ROM monitor for different architectures, this is not usually a problem; for example, one would not seriously expect to use tools for a 68K on an i86 system. Development teams tend to specialise in one or two MCU types. The "Purchasing Department's Stone" that finds the one universal ICE is still a myth (and is likely to remain so for the foreseeable future). The closest you can get to it is the JTAG system, where the hardware is the same and "only" the software is different. Nexus is the follow-on from JTAG and will hopefully enable some more standardisation. It will have to happen as chips get more complex and more of the debug functionality is placed on the chip.


4. A Place for Everything

In a large project there is a need for all three types of debugger. Actually, there is a place for all three types in any project, but sometimes the cost prohibits it. In the embedded world, where safety, time and resources (in the target as well as financial) are hard taskmasters, it is the ICE that is the most useful and cost-effective item.

4.1. The simulator

Used to test new sections of code during construction and remove trivial errors, and for unit testing of functions or modules in test harnesses. The added advantage is that, as no real target hardware is required, simulators permit initial development before any hardware, including the target MCU, is available. Additionally, as simulators tend to be about a quarter of the cost of an ICE, they can be widely used in large teams to unit-test parts of the software. Thus testing or prototype demonstrations can be done anywhere that you can take a laptop.

4.2. The ROM monitor

Used to test initial versions of the program with real signals and to provide back-up to the emulator. The ROM monitor can be used to control parts of a test program on some known-good development hardware, usually from the chip manufacturers or their partners, to try out peripherals and test algorithms on real silicon, if not the final target. The ROM monitor is used like the simulator to test items, but particularly the I/O.

4.3. The emulator

Used to integrate the real project hardware and software, remove time- and hardware-dependent bugs, assess run-time performance and test code in its target environment. Finally, to prove 100% code coverage when running in the real system. Emulator-based testing, as required by TickIT, is an important phase, which is covered in another paper. Additionally, the ICE should be able to run in stand-alone mode, i.e. without a target, just a pod connected. This gives it the same versatility as the simulator in that it does not need a real target system, although it will need a real MCU in the ICE.


4.4. Project Comparison

Smaller undertakings may appear to be able to make do with just a simulator and ROM monitor, possibly bringing in a full emulator for solving difficult problems. Indeed, many simulator packages include monitor-based versions of the simulator. In these projects, and others where real-time performance is not critical, the debugger provided by the simulator may be sufficient. For larger undertakings, or those where there is a strong real-time element, the debugger originating from the emulator may well be the best choice. How then do the overall capabilities of a monitor-based debugger compare with its emulator and simulator counterparts? Taking the ubiquitous 8051 family as an example, here are some important factors:

HiTOP51 - TELEMON51 monitor
DScope51+ - MON51 monitor

4.5. Target Requirements

ROM-less 8051 variants only, with at least 2K of available external RAM and 8K of spare ROM space. The monitor requires some CPU memory space.

4.5.1. Monitor:

Lower 32 KB to be mapped to RAM, with /OE connected to /RD ANDed with /PSEN. This is so that the monitor can download code from the host for execution, and also so that breakpoints can be implemented by inserting LJMP instructions into the program. This permits the monitor to regain control after hitting a breakpoint previously set by the user.

Upper 32 KB must contain 8K of external RAM for the exclusive use of the monitor.

Some means of remapping the monitor EPROM to 8000H after power-on, probably utilising a special memory-decode PAL or discrete logic.

Sole access to the CPU (or other) serial port device.

4.5.2. Emulator:

Target Requirements: None

4.5.3. Simulator:

Target requirements: None - no link with hardware


5. Comparison Of Benefits

5.1. Monitor Advantages Low cost, and it may be embedded in the final application to allow field debugging. May be an adequate substitute for a second emulator on larger projects.

5.2. Monitor Disadvantages Limits user code to 32KB and data to 26K. Requires fully debugged, working hardware such as a target board. Uses the CPU serial port for comms to the PC, preventing access by the application program. Requires ROM space, which is unlikely to be available in a single-chip 8051 application, and also needs external RAM. Does not allow read/write triggers to be set. Needs a special single-plane memory configuration which does not suit widely used C compilers such as Keil and IAR. Has no provision for tracing (recording) program execution. Steals some CPU time due to internal housekeeping activities. Can be crashed by errant programs - a hardware reset button is essential!

5.3. Simulator Advantages Moderate cost, with no resources stolen from the "simulated CPU", so the full address range is available. Requires less setting up, configuration or hardware debugging to function. Can be made to provide input signals that would be very hard to produce in real hardware, allowing abnormal-condition tests. Can support any CPU variant. An integral terminal can be simulated, connected to the CPU serial port. Cannot crash or hang up… not strictly true. Some can be had with monitor and emulator versions.

5.4. Simulator Disadvantages As the CPU is simulated, the simulation may deviate from real CPU behaviour. No way of testing response in real situations. No way of producing real signals for peripherals. Not real-time operation. The effort required to produce a simulation of even simple external signals can be considerable. Can be slow to execute code - timeouts which take seconds on real silicon suddenly take several minutes.


5.5. Emulator Advantages Will work with faulty or incomplete hardware. Takes no resources of any sort from the target system. Can run stand-alone with no target hardware. Allows triggering on READ/WRITE/FETCH. Can combine externally sampled signals with software events. Can distinguish between executed and discarded opcodes. Allows tracing of program execution in real time. Can offer performance-analysis and coverage functions. Will work with single-chip applications. Does not influence real-time operation. Can allow the events leading to a complete target crash to be traced.

5.6. Emulator Disadvantages Perceived cost. The initial outlay for an ICE is apparently a lot, though less than half what they used to cost. However, in most cases, a relatively inexpensive "cable" is all that is required to change target type.

5.7. Super Monitor The emergence of the "super-monitor" is part of a general blurring of the divisions between the various tools used in microprocessor debugging. A number of compiler manufacturers produce software-based CPU simulators that may be supplied in simulator, monitor and emulator varieties. Examples are Keil dScope, IAR C-SPY and Metrowerks CodeWarrior, all of which started life as software simulators. With the co-operation of emulator manufacturers, these are now able to make use of the trigger and trace hardware provided by the emulator. The reverse is also true: emulator manufacturers are turning ICE front ends into monitor front ends and simulators - Hitex HiTOP, for example. In some cases, emulator manufacturers seem to be abandoning their own debuggers in favour of off-the-shelf packages from compiler vendors! The stand-alone ICE can be used like a simulator but has the advantages of being real time and using real silicon. It is more expensive, though, and has to have the real silicon in there somewhere.


5.8. Performance Compromises All this sounds like the answer to an engineer's prayer. Unfortunately, the real situation is somewhat different. Having the debugger's functionality imposed on an emulator design can be very limiting - no two emulator designs are the same! A good example is that dScope51 and C-SPY have no provision for accessing the emulator's trace buffer on the fly, nor can they show data values held in the trace buffer. As these features are found only on the most powerful emulators, the simulator-derived debugger does not support them. Also, if the emulator's trigger capabilities are more powerful than the norm, they will simply not be accessible. With the emulation hardware so severely hampered, the worth of having it at all must be questioned! This is why Hitex introduced the HiSIM51 simulator debugger, which is unique in being derived from an emulator interface, namely HiTOP, as used on the MX/AX 51 and 166 units. HiTOP is also the interface used on the Hitex monitor.

5.9. Cradle To Grave Support In theory, then, the engineer can start development with a simulation debugger running purely on a PC. When the need arises, a target board can be procured which, using a monitor, allows the code to be run on a real silicon CPU, communicating with real-world signals. When the proper hardware is ready and real-time elements start to become important, the target/monitor board is set aside in favour of a full in-circuit emulator. Finally, post-release, the monitor is linked into the application code to confer field testability. Thus, from "cradle to grave", the same debugger software could be used.


6. Post Production Post-production fault-finding can be done by embedding a monitor, ICE-Connect, JTAG or BDM into a product. Embedded equipment, whilst being very different from everything else (and from itself), does have a couple of common characteristics: it is usually not easily accessible, and it does not usually have a screen and keypad that are not dedicated to a particular task. So when it is out in the field it is not that easy to test. Take, for example, a small telecomms switching unit "somewhere in the Hebrides" that looks after a major transatlantic link. The network can see that there is a problem, but how do you find it? A drive from a warm network centre in Glasgow on a wet winter Friday night to fix a box on the Western Isles is not anyone's idea of fun. The answer is to embed a ROM monitor or, if the cost of the equipment warrants it, a BDM ICE. These can be accessed remotely over a telephone line and modem. If the equipment can be physically reached, the options multiply. A simple 7-segment display giving a fault number is common. An engineer's serial port on the board can give you a terminal window, bespoke test software in a laptop, or a ROM monitor using the familiar simulator/monitor/ICE software. There are some other options, depending on the cost and complexity of the equipment and, of course, the cost of downtime. Simple connectors such as JTAG and ICE-Connect can give ICE capability to all production units at minimal cost; in the case of ICE-Connect it is simply a matter of laying out the PCB and two rows of header pins. Many third-party development boards are designed with these connectors so that any production unit can be ICE'd with no further work and without removing the CPU. With the small size of a modern ICE and the low cost of laptop PCs, what was a highly expensive lab tool is now a practical field engineer's tool. If you think ICE are expensive for field engineers, have a look at their expenses!
The obvious advantage of using a JTAG debugger is that the same tools can be used in development and in the field, but… JTAG does not have anything like the features and specification of a full ICE. The bugs that are likely to have escaped development into the field are the ones least likely to be found by a debugger (as opposed to an ICE).


7. ICE of the future The ICE has come a long way in a short time. Where to next? The on-chip ICE? That is some way off. Eventually JTAG, BDM, OCDS, Nexus and ICE-Connect will be replaced with a fast, multi-channel, fibre-optic link from target MCU to ICE, with the ICE itself on a PCMCIA card… It was two years ago that I wrote that; now I expect a fibre-optic line to a small hardware module that converts to USB, as even PCMCIA disappears…

In the meantime the single-family ICE will disappear and the mythical Universal ICE will reappear as the modular ICE, as the ability to put more electronics onto smaller chips increases at the same rate as costs fall. The cost of ASICs is falling in real terms, permitting ever higher integration. Already ICE use Ethernet and USB rather than serial and parallel connections to the host. These changes will be helped by the growth of family-specific on-chip debug. However, thanks to on-chip debug, only one ICE and lead


[Figure: the Hitex Tanto modular ICE. Run control (debug) uses the Tanto Base plus Tanto PL; port trace (verification) adds a Tanto PT; bus trace (analysis and optimisation) adds a Tanto BT and BL. The modules connect to the embedded system's CPU via its debug port (OCDS Level 1) and trace port (OCDS Level 2).]


might be needed per family, with the rest being universal modules. The trouble the ICE vendors will have is that their highly sophisticated and miniaturised modular ICE will start to physically look like the cheap "universal" ICE of the 1990s that they warned against. The low-end universal ICE should disappear as the cost of a "proper" ICE falls. The ICE will start to be used with other tools, such as unit-test tools like Tessy, CASE tools like Rhapsody and system-test tools like LabVIEW, where the ICE is driven remotely by its script language. The ICE will change its role from pure debug to test, both unit and system. A full ICE can also be used as prototype hardware; this has been done in the 8051 world, where the ICE was used to drive a prototype ASIC before the production ASIC was made.


8. Conclusions In-circuit emulators are not as expensive as folklore suggests and, like a lot of things with a high up-front cost, work out remarkably good value in the long run. For example, the MDS-800 was £12,500 in 1975, whereas a top-of-the-range Hitex 8051 ICE now costs half that, and prices start at about a quarter of it. The cheap pseudo and universal ICEs do cost less, but you get what you pay for. Allowing for inflation, the REAL cost of an ICE has plummeted as their capability, usability and reliability have soared. ICE, with their code-coverage and timing abilities, are now moving into the testing area. ICE pay for themselves rapidly, as they quickly find faults that no other tool would find. The ability to have the same front-end software for the simulator, monitor and ICE is a great help to developers. With the increased use of faster single-chip solutions and the requirements for professionally developed software, the old kitchen-table methods will die out. Already most of the standards call for code and system verification that can only be done with an ICE. If we are still here on the first of January 2000, I think there will be more emphasis on validating embedded software on all systems, not just those that are currently perceived as safety critical. Well, I wrote that in 1999 and in 2002 we are still here. Yes, I am seeing more emphasis on validating and testing embedded code more rigorously; in 2002 I presented a paper (QuEST 3) on that topic. As mentioned before, modern technology has in some cases been used to mask a problem, using software or inexpensive technology to superficially mimic the real thing. So choose your ICE or debugger carefully.

It is well worth buying good quality tools.

and

The ART in Embedded Engineering comes through

Engineering discipline.


9. References This is the full set of references used across the whole QuEST series. Not all the references are referred to in all of the QuEST papers. All of these books are in the author's own library and most have been reviewed for the ACCU. The reviews for these books and about 3000 others are on http://www.accu.org/bookreviews/public/

Andrews & Ince, Practical Formal Methods with VDM, McGraw-Hill, 1991, ISBN 0-07-707214-6
Ball, Stuart. Debugging Embedded Microprocessor Systems, Newnes, 1998, ISBN 0-7506-9990-6
Ball, Stuart. Embedded Microprocessor Systems: Real World Design 2nd Ed, Newnes, 2000, ISBN 0-7506-7234-X
Ball, Stuart. Analog Interfacing to Embedded Microprocessors: Real World Design, Newnes, 2001, ISBN 0-7506-7339-7
Baumgartner J, Emulation Techniques, Hitex DE internal paper, May 2001
Barr, Michael. Programming Embedded Systems in C and C++, O'Reilly, 1999, ISBN 1-56592-354-5
Beach, M. Hitex C51 Primer 3rd Ed, Hitex UK, 1995, http://www.hitex.co.uk (Draft 3.5 is on http://quest.phaedsys.org/)
Beach M, Embedding Software Quality Part 1, Hitex UK. Available from http://www.hitex.co.uk
Berger, Arnold. Embedded Systems Design: An Introduction to Processes, Tools and Techniques, CMP Books, 2002, ISBN 1-57820-073-3
Black, Rex. Managing the Testing Process (2nd ed), Wiley, 2002, ISBN 0-471-22398-0
Bramer, Brian & Susan. C for Engineers 2nd Ed, Arnold, 1997, ISBN 0-340-67769-4
Bramer, Brian & Susan. C++ for Engineers, Arnold, 1996, ISBN 0-340-64584-9


Brooks, Fred. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition, Addison Wesley, 1995, ISBN 0-201-83595-9
Brown, John. Embedded Systems Programming in C and Assembly, VNR, 1994, ISBN 0-442-01817-7
Buchner F, Embedding Software Quality Part 1, Hitex DE. Available from www.hitex.co.uk
Buchner F, The Classification Tree Method, internal paper: Hitex DE, 2002
Buchner F, The Tessy article for the ESC II brochure, Hitex DE, 2002
Burden, Paul. Perilous Promotions and Crazy Conversions in C, PR Ltd, MISRA-C Conference 2002. http://www.programmingresearch.com/
Burns & Wellings, Real-Time Systems and Their Programming Languages, Addison Wesley, 1989, ISBN 0-201-17529-0
Chen, Poon & Tse, Classification-tree restructuring methodologies: a new perspective, IEE Proceedings Software, Vol 149 No 2, April 2002, pp 65-74
Clements, Alan. 68000 Family Assembly Language, PWS, 1994
Coleman et al, Object-Oriented Development: The Fusion Method, Prentice-Hall, 1994, ISBN 0-13-101040-9
Computer Weekly, RAF Justice: How the Royal Air Force blamed two dead pilots and covered up problems with the Chinook's computer system FADEC, Computer Weekly, 1997
Cooling J, Real-Time Software Systems, ITC Press, 1997, ISBN 1-85032-274-0
Cooling J, Software Design for Real-time Systems, ITC Press, 1991, ISBN 1-85032-279-1
Cox B, Software ICs and Objective-C, Interactive Programming Environments, McGraw Hill, 1984
Dasgupta, Subrata. Computer Architecture: A Modern Synthesis: Volume 1 Foundations, Wiley, 1989, ISBN 0-471-61277-4
Dasgupta, Subrata. Computer Architecture: A Modern Synthesis: Volume 2 Advanced Topics, Wiley, 1989, ISBN 0-471-61276-6


Defenbaugh & Smedley, C through Design, Franklin, Beedle & Associates, 1988, ISBN 0-938661-10-8
Deitel, Harvey. Operating Systems 2nd Ed, Addison Wesley, 1990, ISBN 0-201-50939-3
Deitel H & P, C: How to Program, Prentice Hall, 1994, ISBN 0-13-288333-3
Deitel H & P, C++: How to Program, Prentice Hall, 1994, ISBN 0-13-288334-0
Dijkstra E W. Have a look at the following web site for some real inspiration: http://www.cs.utexas.edu/users/EWD/
Douglass BP, Doing Hard Time: Developing Real-Time Systems with UML, Addison Wesley, 1999, ISBN 0-201-49837-5
Edwards, Keith. Real-Time Structured Methods: Systems Analysis, Wiley, 1993, ISBN 0-471-93415-1
Embley, Kurtz & Woodfield, Object-Oriented Systems Analysis, Yourdon Press, 1992, ISBN 0-13-629973-3
Fenton et al, Software Quality Assurance and Measurement: A Worldwide Perspective, ITCP, 1995, ISBN 1-85032-174-4
Fertuck, L. Systems Analysis and Design with CASE Tools, WCB, 1992
Gamma, Erich et al, Design Patterns: Elements of Reusable Object-Oriented Software, Addison Wesley, 1994, ISBN 0-201-63361-2
Ganssle, Jack. The Art of Programming Embedded Systems, Academic Press, 1992, ISBN 0-12-274880-8
Ganssle, Jack. The Embedded Muse, various editions. http://www.ganssle.com/index.htm
Grehan, Moote & Cyliax, Real-Time Programming: A Guide to 32-bit Embedded Development, Addison Wesley, 1998
Goldberg & Rubin, Succeeding with Objects: Decision Frameworks for Project Management, Addison Wesley, 1995, ISBN 0-201-62878-3
Hatton, Les. Safer C: Developing Software for High-integrity and Safety-critical Systems, McGraw-Hill, 1994, ISBN 0-07-707640-0


Heath, Steve. Microprocessor Architectures: RISC, CISC & DSP 2nd Ed, Butterworth-Heinemann, 1995, ISBN 0-7506-2303-9
Heath, Steve. Embedded Systems Design, Newnes, 1997, ISBN 0-7506-3237-2
Hills C A, Embedded C: Traps and Pitfalls, Phaedrus Systems, September 1999, quest.phaedsys.org/
Hills C A & Beach M, Embedded Debuggers, Hitex (UK) Ltd, April 1999. http://www.hitex.co.uk & quest.phaedsys.org
Hills C A, Tile Hill Style Guide, Phaedrus Systems, 2001, quest.phaedsys.org/
Hills C A & Beach M, Hitex. SCIL-Level: a paper for project managers, team leaders and engineers on the classification of embedded projects and tools. Useful for getting accountants to spend money. Download from www.scil-level.org
HM Home Office, Reforming the Law on Involuntary Manslaughter: The Government's Proposals, www.homeoffice.gov.uk/consult/lcbill.pdf
Jacobson et al, Object-Oriented Software Engineering: A Use Case Driven Approach, Addison Wesley, 1992, ISBN 0-201-55435-0
Johnson, S. C. 'Lint, a Program Checker', in Unix Programmer's Manual, Seventh Edition, Vol. 2B, M. D. McIlroy and B. W. Kernighan, eds. AT&T Bell Laboratories: Murray Hill, NJ, 1979
Jones, Douglas W. A History of Punched Cards, Associate Professor of Computer Science at the University of Iowa. http://www.cs.uiowa.edu/~jones/cards/index.html; see also http://www.cwi.nl/~dik/english/codes/punched.html
Jones, Derek. The 7+/-2 Urban Legend, MISRA-C Conference 2002. http://www.knosof.co.uk/
Kaner, Bach & Pettichord, Lessons Learned in Software Testing: A Context-Driven Approach, Wiley, 2002, ISBN 0-471-08112-4
Kernighan, Brian W & Pike, The Practice of Programming, Addison Wesley, 1999, ISBN 0-201-61586-X
Kerzner, Harold. Project Management: A Systems Approach to Planning, Scheduling, and Controlling (7th ed), Wiley, 2001, ISBN 0-471-39342-8


Koenig, Andrew. C Traps and Pitfalls, Addison Wesley, 1989


K&R, The C Programming Language 2nd Ed, Prentice-Hall, 1988
Lions, J. L. Ariane 5: Flight 501 Failure. Report by the Inquiry Board, Ariane, 1996
Marick B, How to Misuse Code Coverage, Reliable Software Technologies, 1997. www.testing.com
Maguire, Steve. Writing Solid Code, Microsoft Press, 1993, ISBN 1-55615-551-4
McConnell, Steve. Code Complete: A Handbook of Practical Software Construction, Microsoft Press, 1993, ISBN 1-55615-484-4
MISRA, Guidelines for the Use of the C Language in Vehicle Based Software, 1998. From http://www.misra.org.uk/ and http://www.hitex.co.uk/
Morton, Stephen. Defining a "Safe Code" Development Process, Applied Dynamics International, 2001
Murphy, Niall. Front Panel: Designing Software for Embedded User Interfaces, R&D Books, 1998, ISBN 0-87930-528-2
Oram & Talbott, Managing Projects with Make 2nd Ed, O'Reilly, 1993, ISBN 0-937175-90-0
Parr, Andrew. Industrial Control Handbook 3rd Ed, Newnes, 1998, ISBN 0-7506-3934-2
Pressman, Software Engineering: A Practitioner's Approach 3rd Ed, McGraw-Hill, 1992, ISBN 0-07-050814-3
PRQA, Programming Research QA-C static analysis tool. www.programmingresearch.com
Randell, Brian. The Origins of Digital Computers, Springer Verlag, 1973
Ritchie, D. M. The Development of the C Language, Bell Labs/Lucent Technologies, Murray Hill, NJ 07974, USA, 1993. Available from http://cm.bell-labs.com/cm/cs/who/dmr/index.html - well worth reading
Rumbaugh et al, Object-Oriented Modelling and Design, Prentice Hall, 1991, ISBN 0-13-630054-5


Simon, David. An Embedded Software Primer, Addison Wesley, 1999, ISBN 0-201-61569


Selic, Gullekson & Ward, Real-Time Object-Oriented Modeling, Wiley, 1994, ISBN 0-471-59917-4
Solingen & Berghout, The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development, McGraw-Hill, 1999, ISBN 0-07-709553-7
Sutter, Ed. Embedded Systems Firmware Demystified, CMP Books, 2002, ISBN 1-57820-099-7
Vahid & Givargis, Embedded System Design: A Unified Hardware/Software Introduction, Wiley, 2002, ISBN 0-471-38678-2
Van Vliet, Software Engineering: Principles and Practice, Wiley, 1993, ISBN 0-471-93611-1
Watkins, John. A Guide to Evaluating Software Testing Tools (V3), Rational Ltd, 2001
Watson & McCabe, Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric
Webster, Bruce. The Art of 'Ware: Sun Tzu's Classic Work Reinterpreted, M&T Books, 1995, ISBN 1-55851-396-5
Whitehead, Richard. Leading a Software Development Team: A Developer's Guide to Successfully Leading People and Projects, Addison Wesley, 2001, ISBN 0-201-67526-9
Wilson, Graham. Embedded Systems & Computer Architecture, Newnes, 2002, ISBN 0-7506-5064-8
Xie & Engler, Using Redundancies to Find Errors, Computer Systems Laboratory, Stanford University. http://www.stanford.edu/~engler/p401-xie.pdf


10. Standards This is the full set of standards used across the whole QuEST series. These are standards as issued by recognised national or international standards bodies. Note: due to the author's position in the standards process, some of the documents referred to are Committee Drafts, or amendments to standards that may not have been made publicly available by the time this is read.

ISO
9899:1990 Programming Languages - C
9899:1999 Programming Languages - C
9899:1999/TC1 Programming Languages - C, Technical Corrigendum 1
9945 Portable Operating System Interface (POSIX): 9945-1 Base Definitions, 9945-2 System Interfaces, 9945-3 Shell and Utilities, 9945-4 Rationale
12207:1995 Information Technology - Software Life Cycle Processes
14764:1999 Information Technology - Software Maintenance
14882:1998 Programming Languages - C++
15288:2002 Systems Engineering - System Lifecycle Processes
JTC1/SC7 N2683 Systems Engineering Guide for ISO/IEC 15288
WDTR 18037.1 Programming languages, their environments and system software interfaces - Extensions for the programming language C to support embedded processors
IEC
61508 (FCD) Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems: Part 1 General Requirements


Part 2 Requirements for Electrical/Electronic/Programmable Electronic Safety-Related Systems
Part 3 Software Requirements
Part 4 Definitions and Abbreviations
Part 5 Examples of Methods for the Determination of SIL
Part 6 Guidelines for the Application of Parts 2 and 3
Part 7 Overview of Techniques and Measures
ISO/IEC JTC 1 N6981 Functional Safety and IEC 61508: A Basic Guide
IEEE You may be wondering where ANSI C is… ANSI C became ISO C 9899:1990, and ISO 9899 has been the international standard ever since. See "A Standard History of C" in Embedded C Traps and Pitfalls.
1016-1998 Recommended Practice for Software Design Descriptions
5001:1999 The Nexus 5001 Forum™ Standard for a Global Embedded Processor Debug Interface
NASA http://sel.gsfc.nasa.gov/website/documents/online-doc.htm
SEL-81-305 Recommended Approach to Software Development Rev 3
SEL-84-101 Manager's Handbook for Software Development Rev 1
SEL-93-002 Cost and Schedule Estimation Study Report
SEL-94-003 C Style Guide, August 1994, Goddard Space Flight Centre
SEL-94-005 An Overview of the Software Engineering Laboratory
SEL-94-102 Software Measurement Guidebook Revision 1
SEL-95-102 Software Process Improvement Guidebook Revision 1
SEL-98-001 COTS Study Phase 1 Initial Characterization Study Report
OSEK
Network Management: Concept and Application Programming Interface, Version 2.50, 31 May 1998
Operating System, Version 2.1 revision 1, 13 November 2000
OIL: OSEK Implementation Language, Version 2.2, 27 July 2000
Communication, Version 2.2.2, 18 December 2000
BCS


Standard For Software Component Testing Draft 3.3 1997


MOD Defence Standards
Def-Stan 00-13 Requirements for the Achievement of Testability in Electronic and Allied Equipment
Def-Stan 00-17 Modular Approach to Software Construction, Operation and Test (MASCOT)
Def-Stan 00-31 (obsolete) The Development of Safety Critical Software for Airborne Systems
Def-Stan 00-42 Part 2 Reliability and Maintainability Assurance Guides, Part 2: Software
Def-Stan 00-54 Part 1 Requirements for Safety Related Electronic Hardware in Defence Equipment, Part 1: Requirements
Def-Stan 00-54 Part 2 Requirements for Safety Related Electronic Hardware in Defence Equipment, Part 2: Guidance
Def-Stan 00-55 Part 1 Requirements for Safety Related Software in Defence Equipment, Part 1: Requirements
Def-Stan 00-55 Part 2 Requirements for Safety Related Software in Defence Equipment, Part 2: Guidance
Def-Stan 00-56 Part 1 Safety Management Requirements for Defence Systems, Part 1: Requirements
Def-Stan 00-56 Part 2 Safety Management Requirements for Defence Systems, Part 2: Guidance
Def-Stan 00-58 Part 1 HAZOP Studies on Systems Containing Programmable Electronics, Part 1: Requirements
Def-Stan 00-58 Part 2 HAZOP Studies on Systems Containing Programmable Electronics, Part 2: General Application Guidance
QuEST Series (see http://QuEST.phaedsys.org)
QuEST 0 Design and Documentation for Embedded Systems
QuEST 1 Embedded C Traps and Pitfalls


QuEST 2 Embedded Debuggers
QuEST 3 Advanced Embedded Testing For Fun
QuEST 4 C51 Primer
QA1 SCIL-Level
QA2 Tile Hill Embedded C Style Guide
QA3 QuEST-C
QA4 PC-Lint & DAC MISRA-C Compliance Matrix

