Embedded Systems: A Hardware-Software Co-Design Approach
About this book

Introduction

Embedded systems are informally defined as a collection of programmable parts surrounded by ASICs and other standard components that interact continuously with an environment through sensors and actuators.
Embedded systems are often used in life-critical situations, where reliability and safety are more important criteria than performance. Today, embedded systems are designed with an ad hoc approach that is heavily based on earlier experience with similar products and on manual design.
Use of higher-level languages such as C helps structure the design somewhat, but with increasing complexity it is not sufficient. Formal verification and automatic synthesis of implementations are the surest ways to guarantee safety. Thus the POLIS system, a co-design environment for embedded systems, is based on a formal model of computation.
It now takes longer for the hardware engineers to create the first physical prototype, which means the schedule gap has grown a bit, but when the prototype does show up, it is often in much better shape than it was in the past. In fact, the chances are good that the first prototype will be reasonably bug-free, something that certainly could rarely be said before simulation was available. But we keep getting back to that pesky gap in the schedule. That gap isn't a real problem in our hypothetical one-person project because that lone engineer can concentrate on getting the hardware working and then do the software, but that model doesn't really work well on larger projects with separate hardware and software groups.
So what does?

Alternatives for early target access

There are a few techniques currently available to allow a smoother transition into full system integration when the hardware arrives. I have listed some of these techniques below, along with notes from my own experience.
Port to similar, readily available hardware first

One option in some circumstances is to do initial development of the software on a readily available target system that is as similar as possible to the eventual target system. This technique was used on a telecommunications system I worked on fairly recently.
The software effort was anticipated to be extensive, and the company realized early in the project that there would be a significant delay before hardware would be available. We made a conscious decision to model our custom-made hardware after a commercially-available VME board. We put some effort early in the project into creating VSB add-on boards that would support the missing peripherals. The effort was very worthwhile in this particular instance.
On the software side, we had a stable platform to work on much earlier than we would have otherwise. We used this platform to debug, test, and optimize the code so that it was reasonably solid by the time we got to system integration. This resulted in significant time savings for the overall project. But this technique is far from a generic cure for the software-hardware gap. The combination of circumstances that allowed it to be used in this case included the ready availability of a suitably similar target system, a budget large enough to support the overhead expense of building the extra hardware to support the commercial system, and the time to do the eventual port to the actual target hardware.
But on a project that can meet these parameters, there can be significant savings in the overall schedule time.

Strictly segregate the software design

This step should be taken on any project, as we are learning with large embedded systems. Figure 2 represents a software design that distributes detailed knowledge of hardware interfaces throughout the code. Not only does this distribution make the code more difficult to debug before the actual hardware becomes available, but it poses a significant barrier to code reuse when the hardware eventually changes. It also makes the interface itself very difficult to debug, because it can get hammered from so many different directions. And if the particular interface chip ever changes, this distribution of knowledge will mean a significant rewrite of many sections of code to adapt to the change.
Figure 3 is a much better design. If the details of the interface change over the life of the system, as such details are wont to do, it is relatively simple to make the changes in the software. Granted, there is a little more overhead involved in the design shown in Figure 3.
Overhead has long been considered a deadly sin in embedded systems, one to be avoided at all costs. That was a tradeoff we could afford in the days of slow CPUs and tightly restricted memory, because the code we were writing for those systems was so much less complex than what we are being called on to create these days. It is still important to write tight, efficient code, but that code must also be reusable, robust, and on schedule. Actually, I suspect most readers out there who are working on medium or large embedded systems are already very familiar with this technique.
The recent increased interest in real-time operating systems (RTOSes) represents a more mature approach to software development for embedded systems than we have seen in the past, when RTOS software was either home-grown or nonexistent.
Most of the projects I have been involved with over the last few years have had a significant segregation in the software group between the applications programmers and the systems programmers. This segregation allows each group to create its part of the system without having to worry as much about the other's turf, which leads to better overall system design, at the cost of some extra overhead.
This is a fairly specialized technique, but it can be a very good one. I ran into this situation a couple of years ago on a small project where I was tasked to develop device drivers for a couple of interfaces to a Motorola system whose hardware was still being developed.
This particular job offered a bonus for on-time completion, so I was especially motivated to meet the ambitious schedule. Commercial boards carrying the same interfaces were readily available, so I decided to use those boards to develop and test the device drivers; this was a much more efficient use of my time than bothering the hardware engineers while they did their thing.
As it turned out, the project was one of the more satisfying ones I have worked on. I had time to fine-tune the drivers without the pressure of being on the critical path for the project, and the eventual port to the target was finished well ahead of schedule. And the bonus money was icing on the cake. As I said, this technique is a somewhat specialized one. But the technique illustrates that there are sometimes options available to inventive programmers, especially now that we are programming in high-level languages to more generic OS interfaces.
In most systems the details of working with a specific CPU are becoming a much smaller part of the application, and that can offer chances for us to be productive for a longer part of a shorter project development cycle.

Simulation

More than a few of you are probably wondering about that hardware simulation capability I referred to earlier.
The question that may have come to mind is why can't we run our software on that hardware simulation, instead of waiting for the real hardware to pop out the back end? This same question has occurred to people at some of those EDA companies, people who are very interested in making their libraries of hardware simulations more valuable. There is still a significant gap between the hardware design tools and those of the software side, but in some respects they are solving very similar problems.
Simulation is a very good example. Some RTOS vendors have been selling simulation software for the last few years, but in my experience it hasn't seen much use. On top of that, the simulations that do full emulation of the target CPU tend to be extremely slow. It was almost a relief to get onto that buggy first hardware prototype, because at least the thing ran and failed in real-time.
But if these extensive system hardware simulations being used by the hardware groups could run actual code reasonably fast, that could be a tremendous boost for systems development. That bell has been rung by a company named Eagle Design Automation, and the resulting product is available today.
I discussed this product with Eagle's President Gordon Hoffman while doing background research for this article. The simulation results that have been achieved by Eagle are fairly impressive. I say fairly impressive, because there are a few caveats in their approach.
They do not provide a full simulation of the CPU, opting instead to run C code on the simulation host in native mode. This approach to system simulation is shown in Figure 4. I am not faulting them for this approach; far from it.
Theirs is potentially a useful tool that could be a significant improvement on the other early-access schemes discussed above. This tool allows access to simulations of custom hardware much earlier than any of the techniques previously described, and provides that access in a controllable workstation environment.
But there are a couple of potential flaws in this approach. For example, it would not be possible to get completely accurate timing information out of the system simulation. To get exact cycle counts for the target CPU, you need to provide a full simulation of the target CPU, one that you can run native code on. Of course, the tradeoff here is that such a full simulation takes much longer to develop for a CPU than the two to four weeks per CPU that Eagle claims for their current simulations.
Given the rate at which CPUs are being developed these days, it could be a daunting task just to keep up. Also relevant here is Mentor Graphics' recent merger with Microtec Research (MRI); I found the discussion of it quite enlightening. A primary goal of the merger was to interface the hardware simulation libraries of Mentor with the CPU simulation capability and software interface of MRI's products. This could provide complete system simulation, giving full timing and debug information to both the hardware and software engineers.
Current design practice suffers from several problems:

- A bias toward software: designers often strive to make everything fit in software, and off-load only some parts of the design to hardware to meet timing constraints.
- A priori definition of partitions, which leads to sub-optimal designs.
- Lack of a well-defined design flow, which makes specification revision difficult and directly impacts time-to-market.

There are many different academic approaches that try to solve the problem of embedded system design. In our opinion, none of them satisfactorily addresses the issues of unbiased specification and efficient automated synthesis for control-intensive reactive real-time systems.
Therefore, we are developing a methodology for specification, automatic synthesis, and validation of this sub-class of embedded systems that includes the examples described above.
Design is done in a unified framework, POLIS, with a unified hardware-software representation, so as to prejudice neither hardware nor software implementation. This model is maintained throughout the design process, in order to preserve the formal properties of the design.