In pursuit of the meaning of "Set of correctness properties" with respect to the SPARC-V9 Standard Reference Model, I continue to read Edsger W. Dijkstra's "A Discipline of Programming", by this time having reached Chapter 13. The FSS project is slowed by three things: its currently small percentage of the Cosmic Horizon time allocation, my intensified job search and the reading of this book. I'm not at all convinced of the relevance of the book to the project. It seems to me that the set of correctness properties is nothing more than "The SPARC Architecture Manual", Version 9, and that the reference model is to be merely a representation of that. I borrowed the Dijkstra book from the University of Texas libraries. I'll stop this silliness when they want the book back, when I find what I'm looking for or when I finish reading the whole book. I keep hoping UT will recall the book.

As I always do to write this blog, I read my notes since the last blog item on the same topic. This time, they constitute an aside. While straightening my desk, I came across various papers I had written or read and wanted to archive them. I wrote some notes to help me find the papers later if necessary, and at the same time wrote about my interest in those papers.

First, there was "Fast Binary Multiplier Design", a lab report I wrote in 1998 during my undergraduate studies at Rochester Institute of Technology. The course was Digital Systems Design for Computer Engineers. The ideas I developed during completion of this lab form the basis for the integer multiplier (MULX) in the Sputnik microprocessor.

A magazine article led to several other papers found on my desk. Keep in mind that I read these papers a few years ago, and the reasoning below dates from then as well; I'm only now documenting that thought process.

Allon Adir, Eli Almog, Laurent Fournier, Eitan Marcus, Michal Rimon, Michael Vinov, Avi Ziv, "Genesys-Pro: Innovations in Test Program Generation for Functional Processor Verification," IEEE Design and Test of Computers, vol. 21, no. 2, pp. 84-93, Mar./Apr. 2004


The first generation of test generators at IBM was developed in the mid-1980s. These generators incorporated a biased, pseudorandom, dynamic generation scheme.[6] The need for a generic solution, applicable to any architecture, led to the development of a model-based test generation scheme ...




I wasn't interested in a generic solution. Mine was to be applicable only to SPARC-V9, so I followed reference 6 to:

Aharon, A.; Bar-David, A.; Dorfman, B.; Gofman, E.; Leibowitz, M.; Schwartzburd, V., "Verification of the IBM RISC System/6000 by a dynamic biased pseudo-random test program generator," IBM Systems Journal, vol. 30, no. 4, pp. 527-538, 1991


For some designs, such as a floating-point unit, the main part of the verification task can be fulfilled by programs that consist of only one instruction. The generation of such programs is relatively straightforward: The generator (or the user) selects an instruction and the required controls, generates the operands randomly (or provides them), and then invokes a reference model of the design to get the expected results. The generation of multiple instruction test programs is much more complicated ... There are approaches [8, 10] that present a way to generate multi-instruction test programs ... Such an approach to test generation may be classified as a static one, since the test programs are assembled first and executed afterwards. There is no relation between the intermediate machine states during execution of the test program and the test generation process. In [RISC System/6000] RTPG the test generation is interleaved with the execution of every instruction as soon as it is generated ...
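The single-instruction flow described in that excerpt is simple enough to sketch. Below is a minimal illustration: select an instruction, randomize its operands, and invoke a reference model for the expected result. The three-entry dictionary of Python functions stands in for a real reference model; the opcode names and the helper are mine for illustration, not anything from the paper or from FSS.

```python
# Minimal sketch of the single-instruction flow: pick an instruction,
# randomize the operands, ask the reference model for the expected result.
import random

# Stand-in reference model: a few 64-bit integer operations.
ref_model = {
    "ADD": lambda a, b: (a + b) & 0xFFFFFFFFFFFFFFFF,
    "SUB": lambda a, b: (a - b) & 0xFFFFFFFFFFFFFFFF,
    "XOR": lambda a, b: a ^ b,
}

def generate_single_instruction_test(opcode=None):
    """Return one test case: (opcode, operands, expected result)."""
    opcode = opcode or random.choice(list(ref_model))
    a, b = random.getrandbits(64), random.getrandbits(64)
    return opcode, (a, b), ref_model[opcode](a, b)

if __name__ == "__main__":
    op, (a, b), expect = generate_single_instruction_test()
    print(f"{op} {a:#x}, {b:#x} -> expect {expect:#x}")
```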




I had decided on the generation of multiple instruction test programs because I wanted to treat the DUV as a black box. That meant that operands would enter the microprocessor from outside (i.e., be loaded from memory) and results would emerge from the microprocessor (i.e., be stored to memory). This implies a sequence of instructions. Furthermore, my approach is static, since the test programs are assembled first and executed afterwards. There is no relation between the intermediate machine states during execution of the test program and the test generation process, because intermediate machine states are generally not observable in a black box. This led me to references 8 and 10. I ignored reference 8, "A VLSI Design Verification Strategy", because its title suggests an approach not specific to microprocessors. I was left with reference 10:

Bellon, C.; Liothin, A.; Sadier, S.; Saucier, G.; Velazco, R.; Grillot, F.; Issenman, M., "Automatic Generation of Microprocessor Test Programs," in Proceedings of the 19th Design Automation Conference, pp. 566-573, 14-16 June 1982


One can choose a classical test environment, in which the test equipment imposes its timing, sends the inputs in the right order to the tested circuit and observes the outputs ... Contrary to this, the tested microprocessor can be the "master" of the execution of its test program ; i.e., once the memory is loaded with the test program and operands, the microprocessor is restarted and executes the test program in the "normal functioning" mode ... But this approach involves a difficult memory space management problem : test program, test operands and test results must be stored in memory, at the actual address given in the test program ; test addresses can conflict, due to the somewhat scattered values imposed by the test. Such an environment is proposed in [ABR 81] (so-called closed loop environment). We had used such an environment in a previous phase of this study ... Test experiments on faulty microprocessors have proven the efficiency of the behavioral test approach ... We have chosen an intermediate approach, in which the memory management is simplified : the microprocessor reads the memory normally, but can not write in it ...




First, it is natural for the microprocessor to be the master of the execution of its test program. Second, as for the potential for address conflicts, I'll cross that bridge when I come to it. Third, I definitely want to allow the microprocessor to write to memory, again because that is the more natural approach. So I was led to reference [ABR 81]:

J. A. Abraham and K. P. Parker, "Practical microprocessor testing: Open and closed loop approaches," in Proc. IEEE COMPCON, Spring 1981, pp. 308-311.
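To make my combination of choices concrete, here is a toy sketch, not the FSS generator, of a static, black-box test program with the memory image carved into disjoint regions so that stores cannot collide with the program or with the operand data. The region bounds, register conventions, opcode pool and mnemonics are all made up for the illustration.

```python
# Toy sketch: static, black-box test program generation with a partitioned
# memory image. Operands enter only through loads from the operand region,
# results leave only through stores to the result region, and the whole
# program is assembled before anything runs.
import random

REGIONS = {
    "code":     0x0000_0000,   # test program lives here
    "operands": 0x0001_0000,   # data the test loads
    "results":  0x0002_0000,   # data the test stores
}

OPS = ["add", "sub", "xor"]    # small, illustrative opcode pool

def generate_static_test(num_ops=4, seed=None):
    """Emit (assembly lines, operand memory image) for one test program."""
    rng = random.Random(seed)
    asm, data = [], {}
    # Assume %g1 holds REGIONS["operands"] and %g5 holds REGIONS["results"];
    # a real generator would also emit the code that sets them up.
    for i in range(num_ops):
        op = rng.choice(OPS)
        a_off, b_off, r_off = 16 * i, 16 * i + 8, 8 * i
        data[REGIONS["operands"] + a_off] = rng.getrandbits(64)
        data[REGIONS["operands"] + b_off] = rng.getrandbits(64)
        asm += [
            f"ldx [%g1 + {a_off}], %g2",   # operand enters from memory
            f"ldx [%g1 + {b_off}], %g3",
            f"{op} %g2, %g3, %g4",
            f"stx %g4, [%g5 + {r_off}]",   # result leaves to memory
        ]
    return asm, data

if __name__ == "__main__":
    program, image = generate_static_test(seed=0)
    print("\n".join(program))
```

The address-conflict problem then goes away by construction, as long as the generator hands out store addresses only from the result region; expected values for that region would come from running the reference model over the whole program.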