Another related development is that the combination of huge design costs, larger available silicon real estate and converging device functionality is pushing more designs toward a flexible SoC-style architecture, even in "traditionally non-SoC" design teams. This creates many problems, both obvious and subtle, for successfully verifying these monster chips and systems. At this year's DAC, Special Session 14, "Verifying an SoC Monster: Whose Job is it Anyway," will shed light on some of these challenges.
The huge state space of an SoC means that verification can never really be complete; it just ends. Verification requirements for high coverage, exposing intricate corner cases and cross-domain functionality with full debug visibility, ensure that simulation will continue as a workhorse for block, cluster and basic chip-level verification for years to come. New technologies that increase the efficiency of the verification process will become a key ingredient in successful verification efforts. This will force innovation and investment in many areas, including better techniques and tools to simplify the design process and to prevent bugs from getting into designs in the first place; standard methods propagating best practices and enabling reuse; innovative bug-finding technologies; more automated flows for block-level verification and coverage convergence; enhanced visualization and failure analysis capabilities; integrated low-power and mixed-signal verification flows; and domain-specific verification automation.
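To make "coverage convergence" concrete, here is a minimal sketch of a functional coverage model in plain Python. The covergroup name, bin boundaries and random stimulus are illustrative assumptions, not the API of any real verification tool; production flows express the same idea with SystemVerilog covergroups and close the remaining bins with constrained-random or automated stimulus.

```python
# A minimal sketch of a functional coverage model; the covergroup,
# bins and stimulus below are hypothetical, not tied to any tool.
import random
from collections import Counter

class CoverGroup:
    """Tracks which value bins stimulus has hit, to measure convergence."""
    def __init__(self, name, bins):
        self.name = name
        self.bins = bins          # bin name -> range/set of values
        self.hits = Counter()     # bin name -> hit count

    def sample(self, value):
        for bin_name, values in self.bins.items():
            if value in values:
                self.hits[bin_name] += 1

    def coverage(self):
        """Fraction of defined bins hit at least once."""
        return len(self.hits) / len(self.bins)

# Drive (hypothetical) random packet lengths until the model converges,
# i.e. every bin has been exercised at least once.
cg = CoverGroup("pkt_len", {
    "small":  range(0, 64),
    "medium": range(64, 1024),
    "jumbo":  range(1024, 9217),
})
while cg.coverage() < 1.0:
    cg.sample(random.randrange(0, 9217))
print(f"{cg.name}: {cg.coverage():.0%} of bins covered "
      f"({sum(cg.hits.values())} samples)")
```

The convergence question in practice is exactly the loop condition above: how much stimulus, and of what kind, is needed before the last few bins are hit; automation aims to shrink that tail.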
Another often-overlooked area where innovative technology and automation should reduce engineering workload is the debug, diagnosis and triage effort in front-end verification, which typically consumes over 30 percent of engineering time and resources.
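As one illustration of where such automation helps, the sketch below buckets failing regression logs by a normalized error signature, so that many failing tests collapse into a few distinct bugs to debug. The log format, regex and test names are hypothetical assumptions; real triage tools work from much richer data, but the bucketing idea is the same.

```python
# A minimal sketch of automated failure triage, assuming failures
# arrive as plain-text logs; the message format is hypothetical.
import re
from collections import defaultdict

SIGNATURE = re.compile(r"(ERROR|FATAL)\s*:\s*(.+)")

def signature_of(log_text):
    """Reduce a failing log to its first error message, with
    run-specific detail (hex values, times) masked so duplicates match."""
    m = SIGNATURE.search(log_text)
    if not m:
        return "<no error message found>"
    msg = m.group(2)
    msg = re.sub(r"0x[0-9a-fA-F]+", "0x*", msg)   # mask addresses/data
    msg = re.sub(r"\b\d+\b", "N", msg)            # mask times/counts
    return msg

def triage(logs):
    """Bucket failing tests by signature; one bucket ~= one bug."""
    buckets = defaultdict(list)
    for test_name, text in logs.items():
        buckets[signature_of(text)].append(test_name)
    return buckets

# Usage with two illustrative failures sharing one root cause:
logs = {
    "test_dma_1": "ERROR: fifo overflow at 0x3f00 after 15230 ns",
    "test_dma_2": "ERROR: fifo overflow at 0x1a2c after 98001 ns",
}
for sig, tests in triage(logs).items():
    print(f"{len(tests)} failure(s): {sig}  <- {tests}")
```

Even this crude normalization turns a wall of red regression results into a short, ranked worklist, which is where much of that 30 percent of engineering time currently goes.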
The performance and capacity required to successfully verify chips are now straining verification tools and IT infrastructure to the limit, affecting both the cost and the productivity of the verification process. Until recently, tool performance benefited from the seemingly inexorable march of single-threaded microprocessor performance. However, single-threaded improvements are now a thing of the past, and multicore (and many-core) throughput-based computing is the "new normal," a major paradigm shift for verification tool providers and system developers. For one, not all verification algorithms lend themselves well to multicore architectures, so a great deal of algorithmic innovation is required to harness the power of the underlying hardware. Another challenge is the relative novelty of the tools, languages and frameworks for software development on multicore architectures. A third problem is the large amount of legacy code in current, widely deployed EDA tools. It will be a major challenge for vendors to retool their software, and for system designers to architect their verification processes, to take full advantage of multicore architectures for maximum throughput.
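The contrast between easy and hard parallelism is worth making concrete. Running many independent tests at once scales almost linearly with core count, while speeding up a single simulation requires the deep algorithmic changes described above. Below is a minimal Python sketch of the easy, throughput-oriented case; run_simulation and the test list are hypothetical stand-ins for real simulator invocations.

```python
# A minimal sketch of throughput-oriented regression parallelism,
# assuming each test runs as an independent process.
from concurrent.futures import ProcessPoolExecutor
import os
import random
import time

def run_simulation(test_name):
    """Stand-in for launching one simulator process for one test."""
    time.sleep(random.uniform(0.1, 0.3))   # pretend to simulate
    return test_name, "PASS"

if __name__ == "__main__":
    tests = [f"test_{i}" for i in range(32)]
    # Independent tests scale with core count; a single long simulation
    # does not, which is why per-test parallelism is the easy win and
    # parallelizing one simulation kernel is the hard algorithmic problem.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        for name, result in pool.map(run_simulation, tests):
            print(name, result)
```

The harder retooling problem is everything this sketch does not touch: making the simulation kernel itself, and the legacy code inside today's EDA tools, exploit many cores within a single run.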