
JAN M RABAEY DIGITAL INTEGRATED CIRCUITS PDF

Sunday, September 1, 2019

Digital Integrated Circuits: A Design Perspective, by Jan M. Rabaey, Anantha Chandrakasan, and Borivoje Nikolić (Prentice Hall).




Intended for use in undergraduate senior-level digital circuit design courses with advanced material sufficient for graduate-level courses. Progressive in content and form, this text successfully bridges the gap between the circuit perspective and system perspective of digital integrated circuit design.


Beginning with solid discussions on the operation of electronic devices and in-depth analysis of the nucleus of digital design, the text maintains a consistent, logical flow of subject matter throughout. The revision addresses today's most significant and compelling industry topics, reflecting the ongoing evolution in digital integrated circuit design, especially with respect to the impact of moving into the deep-submicron realm.

The table of contents includes: A Historical Perspective; Issues in Digital Integrated Circuit Design; Quality Metrics of a Digital Design; Cost of an Integrated Circuit; Functionality and Robustness; Packaging Integrated Circuits; Perspective—Trends in Process Technology; The Diode; A Word on Process Variations; Technology Scaling; A First Glance; Interconnect Parameters—Capacitance, Resistance, and Inductance; Electrical Wire Models; A Look into the Future; The Static Behavior; The Dynamic Behavior; Power, Energy, and Energy-Delay; How to Choose a Logic Style?; Timing Metrics for Sequential Circuits; Classification of Memory Elements; Static Latches and Registers; Dynamic Latches and Registers; Pulse Registers; Sense-Amplifier Based Registers; An Approach to Optimize Sequential Circuits; Non-Bistable Sequential Circuits; Choosing a Clocking Strategy; Custom Circuit Design; Cell-Based Design Methodology; Array-Based Implementation Approaches; Perspective—The Implementation Platform of the Future; Capacitive Parasitics; Resistive Parasitics; Inductive Parasitics; Advanced Interconnect Techniques; Timing Classification of Digital Systems; Self-Timed Circuit Design; Synchronizers and Arbiters; Future Directions and Perspectives; and Datapaths in Digital Processor Architectures.

Chapter 1: Introduction

One has long grown accustomed to the idea of digital computers. Evolving steadily from mainframe and minicomputers, personal and laptop computers have proliferated into daily life.

More significant, however, is a continuous trend towards digital solutions in all other areas of electronics. Instrumentation was one of the first noncomputing domains where the potential benefits of digital data manipulation over analog processing were recognized. Other areas such as control were soon to follow. Only recently have we witnessed the conversion of telecommunications and consumer electronics towards the digital format.

Increasingly, telephone data is transmitted and processed digitally over both wired and wireless networks. The compact disk has revolutionized the audio world, and digital video is following in its footsteps. The idea of implementing computational engines using an encoded data format is by no means an idea of our times.

In the early nineteenth century, Babbage envisioned large-scale mechanical computing devices, called Difference Engines [Swade93]. Although these engines used the decimal number system rather than the binary representation now common in modern electronics, the underlying concepts are very similar. The Analytical Engine, developed in 1834, was perceived as a general-purpose computing machine, with features strikingly close to modern computers.

It even used pipelining to speed up the execution of the addition operation! Unfortunately, the complexity and the cost of the designs made the concept impractical. For instance, the design of Difference Engine I, part of which is shown in Figure 1, proved far too complex and costly to complete.

Early digital electronics systems were based on magnetically controlled switches or relays. They were mainly used in the implementation of very simple logic networks.

Examples of such are train safety systems, where they are still being used at present. The age of digital electronic computing only started in full with the introduction of the vacuum tube. While originally used almost exclusively for analog processing, it was realized early on that the vacuum tube was useful for digital computations as well. Soon complete computers were realized. It became rapidly clear, however, that this design technology had reached its limits.

Reliability problems and excessive power consumption made the implementation of larger engines economically and practically infeasible. All changed with the invention of the transistor at Bell Telephone Laboratories in 1947 [Bardeen48], followed by the introduction of the bipolar transistor by Shockley in 1949 [Schockley49]. It took until 1956 before this led to the first bipolar digital logic gate, introduced by Harris [Harris56], and even more time before this translated into a set of integrated-circuit commercial logic gates, called the Fairchild Micrologic family [Norman60].

Other logic families were devised with higher performance in mind. Examples of these are the current-switching circuits that produced the first subnanosecond digital gates and culminated in the ECL (Emitter-Coupled Logic) family [Masaki74], which is discussed in more detail in this textbook. TTL (Transistor-Transistor Logic) had the advantage, however, of offering a higher integration density and was the basis of the first integrated circuit revolution.

In fact, the manufacturing of TTL components is what spearheaded the first large semiconductor companies such as Fairchild, National, and Texas Instruments. The family was so successful that it composed the largest fraction of the digital semiconductor market until the 1980s. Ultimately, bipolar digital logic lost the battle for hegemony in the digital design world for exactly the reasons that haunted the vacuum tube approach: the power consumption per gate puts a firm upper bound on the number of gates that can reliably be integrated on a single die. Although attempts were made to develop high-integration-density, low-power bipolar families such as I2L (Integrated Injection Logic) [Hart72], the torch was gradually passed to the MOS digital integrated circuit approach.

The basic principle behind the MOS transistor had been proposed by J. Lilienfeld (Canada) as early as 1925 and, independently, by O. Heil in England in 1935. Insufficient knowledge of the materials and gate stability problems, however, delayed the practical usability of the device for a long time.

Once these were solved, MOS digital integrated circuits started to take off in full in the early 1970s. (An intriguing overview of the evolution of digital integrated circuits can be found in [Murphy93]; most of the data in this historical overview has been extracted from that reference.) Because of the complexity of the manufacturing process, the first practical MOS integrated circuits were implemented in PMOS-only logic and were used in applications such as calculators. The second age of the digital integrated circuit revolution was inaugurated with the introduction of the first microprocessors by Intel in the early 1970s, the 4004 followed by the 8080 [Shima74].

Simultaneously, MOS technology enabled the realization of the first high-density semiconductor memories. These events were at the start of a truly astounding evolution towards ever higher integration densities and speed performances, a revolution that is still in full swing right now. The road to the current levels of integration has not been without hindrances, however. In the late 1970s, NMOS-only logic started to suffer from the same plague that had made high-density bipolar logic unattractive or infeasible: power consumption. This realization, combined with progress in manufacturing technology, finally tilted the balance towards the CMOS technology, and this is where we still are today.


Interestingly enough, power consumption concerns are rapidly becoming dominant in CMOS design as well, and this time there does not seem to be a new technology around the corner to alleviate the problem. Although the large majority of the current integrated circuits are implemented in the MOS technology, other technologies come into play when very high performance is at stake.

BiCMOS is used in high-speed memories and gate arrays. When even higher performance is necessary, other technologies emerge besides the already mentioned bipolar silicon ECL family—Gallium-Arsenide, Silicon-Germanium and even superconducting technologies. These technologies only play a very small role in the overall digital integrated circuit design scene. With the ever increasing performance of CMOS, this role is bound to be further reduced with time.

Hence the focus of this textbook on CMOS only. In the 1960s, Gordon Moore, then with Fairchild Corporation and later cofounder of Intel, predicted that the number of transistors that can be integrated on a single die would grow exponentially with time. The validity of this prediction is best illustrated with the aid of a set of graphs: as can be observed, integration complexity doubles approximately every 1 to 2 years.

As a result, memory density has increased by more than a thousandfold since 1970. An intriguing case study is offered by the microprocessor. From its inception in the early seventies, the microprocessor has grown in performance and complexity at a steady and predictable pace. The number of transistors and the clock frequency for a number of landmark designs are collected in Figure 1.

Clock frequencies double every three years and have reached into the gigahertz range. This is illustrated in Figure 1. An important observation is that, as of now, these trends have not shown any signs of a slowdown.

It should be no surprise to the reader that this revolution has had a profound impact on how digital circuits are designed. Early designs were truly hand-crafted.

Every transistor was laid out and optimized individually and carefully fitted into its environment. This is adequately illustrated in Figure 1. This approach is, obviously, not appropriate when more than a million devices have to be created and assembled.

With the rapid evolution of the design technology, time-to-market is one of the crucial factors in the ultimate success of a component. (Observe how the fraction of the transistors devoted to memory is increasing over time [Young99].) Designers have, therefore, increasingly adhered to rigid design methodologies and strategies that are more amenable to design automation. The impact of this approach is apparent from the layout of one of the later Intel microprocessors, the Pentium, shown in Figure 1.

Instead of the individualized approach of the earlier designs, a circuit is constructed in a hierarchical way: the design is partitioned into modules, which are in turn composed of cells, and cells are reused as much as possible to reduce the design effort and to enhance the chances for a first-time-right implementation. The fact that this hierarchical approach is at all possible is the key ingredient for the success of digital circuit design and also explains why, for instance, very large scale analog design has never caught on. The obvious next question is why such an approach is feasible in the digital world and not (or to a lesser degree) in analog designs.

The crucial concept here, and the most important one in dealing with the complexity issue, is abstraction. At each design level, the internal details of a complex module can be abstracted away and replaced by a black box view or model. This model contains virtually all the information needed to deal with the block at the next level of hierarchy. For instance, once a designer has implemented a multiplier module, its performance can be defined very accurately and can be captured in a model.

The performance of this multiplier is in general only marginally influenced by the way it is utilized in a larger system. For all purposes, it can hence be considered a black box with known characteristics. As there exists no compelling need for the system designer to look inside this box, design complexity is substantially reduced. The impact of this divide-and-conquer approach is dramatic.

Instead of having to deal with a myriad of elements, the designer has to consider only a handful of components, each of which is characterized in performance and cost by a small number of parameters. The situation is similar to that of a programmer calling routines from a software library: someone writing a large program does not bother to look inside those library routines. The only thing he cares about is the intended result of calling one of those modules.

Typically used abstraction levels in digital circuit design are, in order of increasing abstraction, the device, circuit, gate, functional module (e.g., an adder or a multiplier), and system levels. No circuit designer will ever seriously consider the solid-state physics equations governing the behavior of the device when designing a digital gate.

Instead he will use a simplified model that adequately describes the input-output behavior of the transistor. For instance, an AND gate is adequately described by its Boolean expression (Z = A·B), its bounding box, the position of the input and output terminals, and the delay between the inputs and the output. This design philosophy has been the enabler for the emergence of elaborate computer-aided design (CAD) frameworks for digital integrated circuits; without it the current design complexity would not have been achievable.
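To make the black-box idea concrete, such a gate-level view can be captured in a handful of fields. The sketch below is illustrative only: the field names (boolean_function, bounding_box_um, delay_ps) and the numbers are assumptions made for this example, not the data model of any particular CAD tool.

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class GateModel:
    """Black-box view of a logic gate: its behavior plus a few abstract properties."""
    name: str
    boolean_function: Callable[..., bool]  # input-output behavior, e.g. Z = A.B
    bounding_box_um: Tuple[float, float]   # layout footprint (width, height), assumed units
    input_terminals: Tuple[str, ...]
    output_terminal: str
    delay_ps: float                        # worst-case input-to-output delay

# An AND gate described only by its expression and its abstract properties.
and2 = GateModel(
    name="AND2",
    boolean_function=lambda a, b: a and b,
    bounding_box_um=(1.2, 0.8),            # illustrative numbers
    input_terminals=("A", "B"),
    output_terminal="Z",
    delay_ps=35.0,
)

print(and2.boolean_function(True, False))  # -> False

A system-level tool never needs to look below these fields; that is precisely the abstraction the text describes.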

Design tools include simulation at the various complexity levels, design verification, layout generation, and design synthesis. An overview of these tools and design methodologies is given in Chapter 11 of this textbook.

Furthermore, to avoid the redesign and reverification of frequently used cells such as basic gates and arithmetic and memory modules, designers most often resort to cell libraries. These libraries contain not only the layouts, but also provide complete documentation and characterization of the behavior of the cells. The use of cell libraries is, for instance, apparent in the layout of the Pentium processor Figure 1.

The integer and floating-point unit, just to name a few, contain large sections designed using the so-called standard cell approach. In this approach, logic gates are placed in rows of cells of equal height and interconnected using routing channels. The layout of such a block can be generated automatically given that a library of cells is available.

The preceding analysis demonstrates that design automation and modular design practices have effectively addressed some of the complexity issues incurred in contemporary digital design.

This leads to the following pertinent question. If design automation solves all our design problems, why should we be concerned with digital circuit design at all? Will the next-generation digital designer ever have to worry about transistors or parasitics, or is the smallest design entity he will ever consider the gate and the module?

The truth is that reality is more complex, and various reasons exist as to why insight into digital circuits and their intricacies will remain an important asset for a long time to come. Semiconductor technologies continue to advance from year to year. For instance, to identify the dominant performance parameters of a given design, one has to recognize the critical timing path first.

This is the case for a large number of application-specific designs, where the main goal is to provide a more integrated system solution, and performance requirements are easily within the capabilities of the technology. Unfortunately for a large number of other products such as microprocessors, success hinges on high performance, and designers therefore tend to push technology to its limits.

At that point, the hierarchical approach tends to become somewhat less attractive.

The performance of, for instance, an adder can be substantially influenced by the way it is connected to its environment. The interconnection wires themselves contribute to delay as they introduce parasitic capacitances, resistances, and even inductances. The impact of the interconnect parasitics is bound to increase in the years to come with the scaling of the technology. Some design entities tend to be global or external (to resort anew to the software analogy).

Examples of global factors are the clock signals, used for synchronization in a digital design, and the supply lines. Increasing the size of a digital design has a profound effect on these global signals. For instance, connecting more cells to a supply line can cause a voltage drop over the wire, which, in its turn, can slow down all the connected cells.

Issues such as clock distribution, circuit synchronization, and supply-voltage distribution are becoming more and more critical. Coping with them requires a profound understanding of the intricacies of digital circuit design.

A typical example of this is the periodic reemergence of power dissipation as a constraining factor, as was already illustrated in the historical overview. Another example is the changing ratio between device and interconnect parasitics. To cope with these unforeseen factors, one must at least be able to model and analyze their impact, requiring once again a profound insight into circuit topology and behavior. A fabricated circuit does not always exhibit the exact waveforms one might expect from advance simulations.

Deviations can be caused by variations in the fabrication process parameters, or by the inductance of the package, or by a badly modeled clock signal. Troubleshooting a design requires circuit expertise. For all the above reasons, it is my belief that an in-depth knowledge of digital circuit design techniques and approaches is an essential asset for a digital-system designer.

Even though she might not have to deal with the details of the circuit on a daily basis, the understanding will help her to cope with unexpected circumstances and to determine the dominant effects when analyzing a design.

Example 1. The function of the clock signal in a digital design is to order the multitude of events happening in the circuit. This task can be compared to the function of a traffic light that determines which cars are allowed to move.

It also makes sure that all operations are completed before the next one starts—a traffic light should be green long enough to allow a car or a pedestrian to cross the road. Under ideal circumstances, the clock signal is a periodic step waveform with abrupt transitions between the low and the high values Figure 1. Consider, for instance, the circuit configuration of Figure 1. This sampled value is preserved and appears at the output until the clock rises anew and a new input is sam- Under normal circuit operating conditions, this is exactly what happens, as demonstrated in the simulated response of Figure 1.

Heavily loading the clock line, however, slows down the clock edges and degrades the signal. When the degradation is within bounds, the functionality of the latch is not impacted. When these bounds are exceeded, the latch suddenly starts to malfunction, as shown in Figure 1.

The output signal makes unexpected transitions at the falling clock edge, and extra spikes can be observed as well.


Propagation of these erroneous values can cause the digital system to go into an unforeseen mode and crash. This example clearly shows how global effects, such as adding extra load to a clock, can change the behavior of an individual module.

Observe that the effects shown are not universal, but are a property of the register circuit used. Besides the requirement of steep edges, other constraints must be imposed on clock signals to ensure correct operation. A second requirement, related to clock alignment, is illustrated in Figure 1. This is confirmed by the simulations shown in Figure 1. Due to delays associated with routing the clock wires, it may happen that the clocks become misaligned with respect to each other.

As a result, the registers are interpreting time indicated by the clock signal differently. If the time it takes to propagate the output of the first register to the input of the second is smaller than the clock delay, the latter will sample the wrong value.

Clock misalignment, or clock skew, as it is normally called, is another example of how global signals may influence the functioning of a hierarchically designed system. Clock skew is actually one of the most critical design problems facing the designers of large, high-performance systems.

The purpose of this textbook is to provide a bridge between the abstract vision of digital design and the underlying digital circuit and its peculiarities. While starting from a solid understanding of the operation of electronic devices and an in-depth analysis of the nucleus of digital design—the inverter—we will gradually channel this knowledge into the design of more complex entities, such as complex gates, datapaths, registers, controllers, and memories.

The persistent quest for a designer, when designing each of the mentioned modules, is to identify the dominant design parameters, to locate the section of the design on which he should focus his optimizations, and to determine the specific properties that set the module under investigation apart from the others.

These properties help to quantify the quality of a design from different perspectives: cost, functionality and robustness, performance, and energy consumption. Which one of these metrics is most important depends upon the application. For instance, pure speed is a crucial property in a compute server. On the other hand, energy consumption is a dominant metric for hand-held mobile applications such as cell phones.

The introduced properties are relevant at all levels of the design hierarchy, be it system, chip, module, or gate. To ensure consistency in the definitions throughout the design hierarchy stack, we propose a bottom-up approach: the metrics are first defined for the simple gate and are then gradually extended toward the higher levels of abstraction.

Fixed Cost. The fixed cost is independent of the sales volume, that is, the number of products sold. An important component of the fixed cost of an integrated circuit is the effort in time and manpower it takes to produce the design.

This design cost is strongly influenced by the complexity of the design, the aggressiveness of the specifications, and the productivity of the designer. Advanced design methodologies that automate major parts of the design process can help to boost the latter. Bringing down the design cost in the presence of an ever-increasing IC complexity is one of the major challenges always facing the semiconductor industry. Additionally, one has to account for the indirect costs: the company overhead that cannot be billed directly to one product.

Variable Cost. This accounts for the cost that is directly attributable to a manufactured product, and is hence proportional to the product volume. Variable costs include the costs of the parts used in the product, assembly costs, and testing costs. Since the fixed cost is amortized over the number of units sold, it also makes sense to have a large design team working for a number of years on a hugely successful product such as a microprocessor.
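The amortization argument is commonly summarized in the following relation; it is the standard textbook form, given here as a sketch:

$$\text{cost per IC} = \text{variable cost per IC} + \frac{\text{fixed cost}}{\text{volume}}$$

For a high-volume product the fixed (design) cost per unit becomes negligible, which is exactly why a large, multi-year design effort can pay off.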

While the cost of producing a single transistor has dropped exponentially over the past decades, the basic variable-cost equation has not changed: the variable cost is dominated by the cost of the die, the cost of testing it, and the cost of packaging. Upon completion of the fabrication, the wafer is chopped into dies, which are then individually packaged after being tested. We will focus on the cost of the dies in this discussion; the cost of packaging and test is the topic of later chapters. (Figure 1 shows a wafer; each square represents a die.) The die cost depends upon the number of dies on a wafer and the percentage of those that are functional.
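In their usual form, the two relations read (standard formulations, given here as a sketch):

$$\text{variable cost} = \frac{\text{cost of die} + \text{cost of die test} + \text{cost of packaging}}{\text{final test yield}}$$

$$\text{cost of die} = \frac{\text{cost of wafer}}{\text{dies per wafer} \times \text{die yield}}$$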

The latter factor is called the die yield. To a first approximation, the number of dies per wafer is simply the wafer area divided by the die area. The actual situation is somewhat more complicated, as wafers are round and chips are square; dies around the perimeter of the wafer are therefore lost. The size of the wafer has been steadily increasing over the years, yielding more dies per fabrication run. The actual relation between cost and area is more complex, and depends upon the die yield.

Both the substrate material and the manufacturing process introduce faults that can cause a chip to fail. Assuming that the defects are randomly distributed over the wafer, and that the yield is inversely proportional to the complexity of the fabrication process, we obtain an expression for the die yield in terms of the defects per unit area, which is a measure of the material- and process-induced faults.
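A widely used form of this yield expression is the negative-binomial model below, in which α is a parameter characterizing the complexity of the manufacturing process (often taken to be around 3 for modern CMOS); it is given here as a sketch of the relation the text refers to:

$$\text{die yield} = \left(1 + \frac{\text{defects per unit area} \times \text{die area}}{\alpha}\right)^{-\alpha}$$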

A value between 0.5 and 1 defects per square centimeter is typical these days. As an example, consider determining the die yield of a CMOS process run. The number of dies per wafer can be estimated with the following expression, which takes into account the dies lost around the perimeter of the wafer.
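A commonly used approximation is given below; the first term divides the wafer area by the die area, and the second term corrects for the partial dies along the perimeter:

$$\text{dies per wafer} = \frac{\pi \times (\text{wafer diameter}/2)^2}{\text{die area}} - \frac{\pi \times \text{wafer diameter}}{\sqrt{2 \times \text{die area}}}$$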

The die yield can then be computed with the aid of the yield expression above. In the example, this means that on average only 40 of the dies will be fully functional. The bottom line is that the number of functional dies per wafer, and hence the cost per die, is a strong function of the die area. While the yield tends to be excellent for smaller designs, it drops rapidly once a certain threshold is exceeded.
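The short calculation below ties the two expressions together. All numbers (wafer diameter, die size, defect density, and α) are illustrative values assumed for this sketch, not the parameters of the worked example referred to above.

import math

def dies_per_wafer(wafer_diameter_cm: float, die_area_cm2: float) -> float:
    """Gross dies per wafer, subtracting the partial dies around the perimeter."""
    radius = wafer_diameter_cm / 2
    return (math.pi * radius**2 / die_area_cm2
            - math.pi * wafer_diameter_cm / math.sqrt(2 * die_area_cm2))

def die_yield(defects_per_cm2: float, die_area_cm2: float, alpha: float = 3.0) -> float:
    """Fraction of functional dies (negative-binomial yield model)."""
    return (1 + defects_per_cm2 * die_area_cm2 / alpha) ** (-alpha)

wafer_d = 20.0        # cm, assumed
die_area = 1.0 * 1.0  # cm^2, assumed
d0 = 1.0              # defects per cm^2, assumed
alpha = 3.0           # process-complexity parameter, assumed

n = dies_per_wafer(wafer_d, die_area)
y = die_yield(d0, die_area, alpha)
# With these assumed numbers: roughly 270 dies, ~42% yield, ~114 functional dies.
print(f"dies per wafer ~ {n:.0f}, die yield ~ {y:.0%}, functional dies ~ {n * y:.0f}")

Doubling the die area in this sketch cuts both the yield and the number of gross dies roughly in half, which is the area sensitivity the text goes on to describe.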

Bearing in mind the equations derived above and the typical parameter values, we can conclude that die costs are proportional to the fourth power of the area. Small area is hence a desirable property for a digital gate.

The smaller the gate, the higher the integration density and the smaller the die size. Smaller gates furthermore tend to be faster and consume less energy, as the total gate capacitance—which is one of the dominant performance parameters—often scales with the area. The number of transistors in a gate is indicative of the expected implementation area.

Other parameters may have an impact, though. For instance, a complex interconnect pattern between the transistors can cause the wiring area to dominate. The gate complexity, as expressed by the number of transistors and the regularity of the interconnect structure, also has an impact on the design cost.

Complex structures are harder to implement and tend to take more of the designer's valuable time. Simplicity and regularity are precious properties in cost-sensitive designs.

The measured behavior of a manufactured circuit normally deviates from the expected response. One reason for this aberration is variation in the manufacturing process. The dimensions, threshold voltages, and currents of an MOS transistor vary between runs or even on a single wafer or die. The electrical behavior of a circuit can be profoundly affected by those variations. The presence of disturbing noise sources on or off the chip is another source of deviations in circuit response.

Some examples of digital noise sources are depicted in Figure 1. For instance, two wires placed side by side in an integrated circuit form a coupling capacitor and a mutual inductance. Hence, a voltage or current change on one of the wires can influence the signals on the neighboring wire. Noise on the power and ground rails of a gate also influences the signal levels in the gate. Most noise in a digital system is internally generated, and the noise value is proportional to the signal swing.
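As a rough sketch of the capacitive case (generic symbols, not the book's notation): if an aggressor wire makes a voltage step ΔV while the neighboring victim node is left floating, charge sharing over the capacitive divider formed by the coupling capacitance C_c and the victim's capacitance to ground C_v gives

$$\Delta V_{\text{victim}} \approx \frac{C_c}{C_c + C_v}\,\Delta V_{\text{aggressor}}$$

so the larger the coupling capacitance relative to the victim's own capacitance, the larger the injected noise.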

Capacitive and inductive cross talk, and the internally-generated power supply noise are examples of such. Other noise sources such as input power supply noise are external to the system, and their value is not related to the signal levels.

For these sources, the noise level is directly expressed in volts or amperes. Noise sources that are a function of the signal level are better expressed as a fraction or percentage of the signal level.

Noise is a major concern in the engineering of digital circuits. How to cope with all these disturbances is one of the main challenges in the design of high-performance digital circuits and is a recurring topic in this book.

The definition and derivation of these parameters requires a prior understanding of how digital signals are represented in the world of electronic circuits. Digital circuits perform operations on logical (or Boolean) variables. A logical variable x can only assume two discrete values: 0 and 1. In a physical implementation, such a variable is represented by an electrical quantity. This is most often a node voltage that is not discrete but can adopt a continuous range of values.

This electrical voltage is turned into a discrete variable by associating a nominal voltage level with each logic state: 1 corresponds to the high voltage level VOH, and 0 to the low level VOL. The difference between the two is called the logic or signal swing Vsw.

An example of an inverter voltage-transfer characteristic (VTC), which plots the output voltage as a function of the input voltage, is shown in Figure 1. The gate threshold voltage marks the midpoint of the switching characteristic, which is obtained when the output of a gate is short-circuited to the input.

This point will prove to be of particular interest when studying circuits with feedback also called sequential circuits. Even if an ideal nominal value is applied at the input of a gate, the output signal often deviates from the expected nominal value.

These deviations can be caused by noise or by the loading on the output of the gate (i.e., by the number of gates connected to the output). The acceptable high and low levels are delimited by the VIH and VIL voltage levels at the input of a gate; the range of input voltages between VIL and VIH is called the undefined (or transition) region. Steady-state signals should avoid this region if proper circuit operation is to be ensured.
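The noise margins discussed below are conventionally defined from these four levels, one for the high state and one for the low state:

$$NM_H = V_{OH} - V_{IH} \qquad\qquad NM_L = V_{IL} - V_{OL}$$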

It is obvious that the margins should be larger than 0 for a digital circuit to be functional and by preference should be as large as possible.

Regenerative Property. A large noise margin is desirable, but it is not a sufficient requirement. Assume that a signal is disturbed by noise and differs from the nominal voltage levels.

As long as the signal is within the noise margins, the following gate continues to function correctly, although its output voltage varies from the nominal one.

This deviation is added to the noise injected at the output node and passed to the next gate. The effect of different noise sources may accumulate and eventually force a signal level into the undefined region. This does not happen if the gate possesses the regenerative property, which ensures that a disturbed signal gradually converges back to one of the nominal voltage levels after passing through a number of logical stages. The property can be understood by considering a chain of inverters: the input signal to the chain is a step waveform with a degraded amplitude, which could be caused by noise.

Instead of swinging from rail to rail, the input only covers a reduced voltage range. From the simulation, it can be observed that this deviation rapidly disappears while progressing through the chain; v1, for instance, already swings over a substantially larger range than the input v0. The inverter used in this example clearly possesses the regenerative property. The conditions under which a gate is regenerative can be intuitively derived by analyzing a simple case study. Assume that a voltage v0, deviating from the nominal voltages, is applied to the first inverter in the chain.

The signal voltage gradually converges to the nominal signal after a number of inverter stages, as indicated by the arrows. In the second case shown in Figure 1, the signal does not converge to a nominal level even after a large number of stages; hence, the characteristic is nonregenerative. The difference between the two cases is due to the gain characteristics of the gates.

To be regenerative, the VTC should have a transient (or undefined) region with a gain greater than 1 in absolute value, bordered by the two legal zones, where the gain should be smaller than 1. Such a gate has two stable operating points. This clarifies the definition of the VIH and VIL levels that form the boundaries between the legal and the transient zones.
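The convergence argument can be illustrated with a toy VTC. The inverter characteristic below (a simple logistic curve) is an assumption made for this sketch, not the simulated gate of the text; the point is only that a characteristic with gain well above 1 in the transition region and gain below 1 near the rails pulls a degraded level back to the nominal values within a few stages.

import math

VDD = 2.5

def inverter_vtc(v_in: float) -> float:
    """Toy inverter VTC: steep (|gain| > 1) around VDD/2, flat (|gain| < 1) near the rails."""
    return VDD / (1.0 + math.exp(8.0 * (v_in - VDD / 2)))

v = 1.6  # degraded 'high' level applied to the first inverter in the chain
for stage in range(1, 7):
    v = inverter_vtc(v)  # signal after each successive inverter
    print(f"v{stage} = {v:.3f} V")
# Odd stages converge toward 0 V (logic 0) and even stages toward VDD (logic 1):
# the initial disturbance disappears after a few regenerative stages.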

Noise Immunity. While the noise margin is a meaningful means for measuring the robustness of a circuit against noise, it is not sufficient. Noise immunity, on the other hand, expresses the ability of the system to process and transmit information correctly in the presence of noise. Many digital circuits with low noise margins have very good noise immunity because they reject a noise source rather than overpower it.

To study the noise immunity of a gate, we have to construct a noise budget that allocates the available noise margin to the various noise sources.


We assume, for the sake of simplicity, that the noise margin equals half the signal swing for both H and L. To operate correctly, the noise margin has to be larger than the sum of the noise values. While the impact of external noise sources is fixed, the impact of the internal sources is strongly dependent upon the noise-suppressing capabilities of the gates, that is, how strongly a disturbance is attenuated before it reaches the output. In later chapters, we will discuss some differential logic families that suppress most of the internal noise, and hence can get away with very small noise margins and signal swings.

Directivity. The directivity property requires a gate to be unidirectional, that is, changes in an output level should not appear at any unchanging input of the same circuit. If not, an output-signal transition reflects to the gate inputs as a noise signal, affecting the signal integrity.

In real gate implementations, full directivity can never be achieved. Some feedback of changes in output levels to the inputs cannot be avoided.


Capacitive coupling between inputs and outputs is a typical example of such feedback. It is important to minimize these changes so that they do not affect the logic levels of the input signals.

Fan-In and Fan-Out. The fan-out denotes the number of load gates N that are connected to the output of the driving gate (Figure 1).

Increasing the fan-out of a gate can affect its logic output levels. From the world of analog amplifiers, we know that this effect is minimized by making the input resistance of the load gates as large as possible (minimizing the input currents) and by keeping the output resistance of the driving gate small (reducing the effects of load currents on the output voltage). When the fan-out is large, the added load can deteriorate the dynamic performance of the driving gate.

For these reasons, many generic and library components define a maximum fan-out to guarantee that the static and dynamic performance of the element meets specification. The fan-in of a gate is defined as the number of inputs to the gate (Figure 1). Gates with a large fan-in tend to be more complex, which often results in inferior static and dynamic properties.

The ideal inverter model is important because it gives us a metric by which we can judge the quality of actual implementations. Its VTC is shown in Figure 1. The input and output impedances of the ideal gate are infinity and zero, respectively (i.e., the gate draws no input current and its output is unaffected by the load).

The values of the dc-parameters are derived from inspection of the graph.

Performance. From a system designer's perspective, the performance of a digital circuit expresses the computational load that the circuit can manage.

For instance, a microprocessor is often characterized by the number of instructions it can execute per second. This performance metric depends both on the architecture of the processor—for instance, the number of instructions it can execute in parallel—and on the actual design of the logic circuitry.

While the former is crucially important, it is not the focus of this textbook; we refer the reader to the many excellent books on this topic [for instance, Patterson96]. When focusing on the pure design, performance is most often expressed by the duration of the clock period (clock cycle time), or by its rate (clock frequency).

The minimum value of the clock period for a given technology and design is set by a number of factors, such as the time it takes for the signals to propagate through the logic, the time it takes to get the data in and out of the registers, and the uncertainty of the clock arrival times. Each of these topics will be discussed in detail over the course of this textbook.

At the core of the whole performance analysis, however, lies the performance of an individual gate. The propagation delay tp of a gate defines how quickly it responds to a change at its input(s).

It expresses the delay experienced by a signal when passing through a gate. The tpLH defines the response time of the gate for a low-to-high (or positive) output transition, while tpHL refers to a high-to-low (or negative) transition. The propagation delay tp is defined as the average of the two. Observe that the propagation delay tp, in contrast to tpLH and tpHL, is an artificial gate quality metric and has no physical meaning per se.
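In equation form, following the definition just given:

$$t_p = \frac{t_{pLH} + t_{pHL}}{2}$$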


It is mostly used to compare different semiconductor technologies, or logic design styles. The propagation delay is not only a function of the circuit technology and topology, but depends upon other factors as well.