Von Neumann architecture

See also: Stored-program computer and Universal Turing machine § Stored-program computer

The von Neumann architecture, which is also known as the von Neumann model and Princeton architecture, is a computer architecture based on the 1945 description by the mathematician and physicist John von Neumann and others in the First Draft of a Report on the EDVAC.[1] This describes a design architecture for an electronic digital computer with parts consisting of a processing unit containing an arithmetic logic unit and processor registers; a control unit containing an instruction register and program counter; a memory to store both data and instructions; external mass storage; and input and output mechanisms.[1][2] The meaning has evolved to be any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the von Neumann bottleneck and often limits the performance of the system.[3]

The design of a von Neumann architecture machine is simpler than that of a Harvard architecture machine, which is also a stored-program system but has one dedicated set of address and data buses for reading data from and writing data to memory, and another set of address and data buses for instruction fetching.

A stored-program digital computer is one that keeps its program instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers were an advancement over the program-controlled computers of the 1940s, such as the Colossus and the ENIAC, which were programmed by setting switches and inserting patch cables to route data and to control signals between various functional units. In the vast majority of modern computers, the same memory is used for both data and program instructions, and the von Neumann vs. Harvard distinction applies to the cache architecture, not the main memory (split cache architecture).

History

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot be used as a word processor or a gaming console. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as they were "designed". "Reprogramming", when it was possible at all, was a laborious process, starting with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up a program on ENIAC and get it working.[4]

With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an instruction set and can store in memory a set of instructions (a program) that details the computation.

A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. Self-modifying code has largely fallen out of favor, since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes.

Capabilities

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. One can "write programs which write programs".[5] This has allowed a sophisticated self-hosting computing ecosystem to flourish around von Neumann architecture machines.

Some high level languages such as LISP leverage the von Neumann architecture by providing an abstract, machine-independent way to manipulate executable code at runtime, or by using runtime information to tune just-in-time compilation (e.g. in the case of languages hosted on the Java virtual machine, or languages embedded in web browsers).

On a smaller scale, some repetitive operations such as BITBLT or pixel & vertex shaders could be accelerated on general purpose processors with just-in-time compilation techniques. This is one use of self-modifying code that has remained popular.

Development of the stored-program concept

The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by the lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society.[6] In it he described a hypothetical machine which he called a "universal computing machine", and which is now known as the "Universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. John von Neumann became acquainted with Turing while he was a visiting professor at Cambridge in 1935, and also during Turing's PhD year at the Institute for Advanced Study in Princeton, New Jersey during 1936 – 1937. Whether he knew of Turing's paper of 1936 at that time is not clear.

In 1936, Konrad Zuse also anticipated in two patent applications that machine instructions could be stored in the same storage used for data.[7]

Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering at the University of Pennsylvania, wrote about the stored-program concept in December 1943.[8][9] In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work.

Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory, which required huge amounts of calculation. This drew him to the ENIAC project, during the summer of 1944. There he joined into the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he wrote up a description titled First Draft of a Report on the EDVAC[1] based on the work of Eckert and Mauchly. It was unfinished when his colleague Herman Goldstine circulated it with only von Neumann's name on it, to the consternation of Eckert and Mauchly.[10] The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced the next round of computer designs.

Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'".[11] His Los Alamos colleague Stan Frankel said of von Neumann's regard for Turing's ideas:

I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936… Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing— in so far as not anticipated by Babbage… Both Turing and von Neumann, of course, also made substantial contributions to the "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities. [12]

At the time that the "First Draft" report was circulated, Turing was producing a report entitled Proposed Electronic Calculator, which described in engineering and programming detail his idea of a machine called the Automatic Computing Engine (ACE).[13] He presented this to the Executive Committee of the British National Physical Laboratory on February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surrounding Colossus, which was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of the ACE design were produced.

Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publication Faster than Thought: A Symposium on Digital Computing Machines (edited by B. V. Bowden), a section in the chapter on Computers in America reads as follows:[14]

The Machine of the Institute For Advanced Studies, Princeton

In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers a report on the logical design of digital computers. The report contained a fairly detailed proposal for the design of the machine which has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but the von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see page 130).

In 1947, Burks, Goldstine and von Neumann published another report which outlined the design of another type of machine (a parallel machine this time) which should be exceedingly fast, capable perhaps of 20,000 operations per second. They pointed out that the outstanding problem in constructing such a machine was in the development of a suitable memory, all the contents of which were instantaneously accessible, and at first they suggested the use of a special vacuum tube—called the "Selectron"—which had been invented by the Princeton Laboratories of the R.C.A. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on the Williams memory. This machine, which was completed in June, 1952 in Princeton has become popularly known as the Maniac. The design of this machine has inspired that of half a dozen or more machines which are now being built in America, all of which are known affectionately as "Johniacs."

In the same book, the first two paragraphs of a chapter on ACE read as follows:[15]

Automatic Computation at the National Physical Laboratory

One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as the Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine.

The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper read before the London Mathematical Society in 1936, but work on such machines in Britain was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948, the latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook.

Early von Neumann-architecture computers

The First Draft described a design that was used by many universities and corporations to construct their computers.[16] Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets.

  • ARC2 (Birkbeck, University of London) officially came online on May 12, 1948.[17]
  • Manchester Small-Scale Experimental Machine (SSEM), nicknamed "Baby" (University of Manchester, England) made its first successful run of a stored-program on June 21, 1948.
  • EDSAC (University of Cambridge, England) was the first practical stored-program electronic computer (May 1949)
  • Manchester Mark 1 (University of Manchester, England) Developed from the SSEM (June 1949)
  • CSIRAC (Council for Scientific and Industrial Research) Australia (November 1949)
  • EDVAC (Ballistic Research Laboratory, Computing Laboratory at Aberdeen Proving Ground 1951)
  • ORDVAC (U-Illinois) at Aberdeen Proving Ground, Maryland (completed November 1951)[18]
  • IAS machine at Princeton University (January 1952)
  • MANIAC I at Los Alamos Scientific Laboratory (March 1952)
  • ILLIAC at the University of Illinois, (September 1952)
  • BESM-1 in Moscow (1952)
  • AVIDAC at Argonne National Laboratory (1953)
  • ORACLE at Oak Ridge National Laboratory (June 1953)
  • BESK in Stockholm (1953)
  • JOHNNIAC at RAND Corporation (January 1954)
  • DASK in Denmark (1955)
  • WEIZAC at the Weizmann Institute of Science in Rehovot, Israel (1955)
  • PERM in Munich (1956?)
  • SILLIAC in Sydney (1956)

Early stored-program computers

The date information in the following chronology is difficult to put into proper order. Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation.

  • The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent.[19][20] However it was partially electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory.[21]
  • The ARC2 developed by Andrew Booth and Kathleen Booth at Birkbeck, University of London officially came online on May 12, 1948.[17] It featured the first rotating drum storage device.[22][23]
  • The Manchester SSEM (the Baby) was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime.
  • The ENIAC was modified to run as a primitive read-only stored-program computer (using the Function Tables for program ROM) and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann.
  • The BINAC ran some test programs in February, March, and April 1949, although was not completed until September 1949.
  • The Manchester Mark 1 developed from the SSEM project. An intermediate version of the Mark 1 was available to run programs in April 1949, but was not completed until October 1949.
  • The EDSAC ran its first program on May 6, 1949.
  • The EDVAC was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951.
  • The CSIR Mk I ran its first program in November 1949.
  • The SEAC was demonstrated in April 1950.
  • The Pilot ACE ran its first program on May 10, 1950 and was demonstrated in December 1950.
  • The SWAC was completed in July 1950.
  • The Whirlwind was completed in December 1950 and was in actual use in April 1951.
  • The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950.

Evolution

Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to some evolutions in their architecture. For example, memory-mapped I/O allows input and output devices to be treated the same as memory.[24] A single system bus could be used to provide a modular system with lower cost. This is sometimes called a "streamlining" of the architecture.[25] In subsequent decades, simple microcontrollers would sometimes omit features of the model to lower cost and size. Larger computers added features for higher performance.

Design limitations

Von Neumann bottleneck

The shared bus between the program memory and data memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the central processing unit (CPU) and memory compared to the amount of memory. Because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continually forced to wait for needed data to be transferred to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every newer generation of CPU.

The von Neumann bottleneck was described by John Backus in his 1977 ACM Turing Award lecture. According to Backus:

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.[26][27]

Mitigations

There are several known methods for mitigating the von Neumann performance bottleneck. For example, providing a cache between the CPU and the main memory, providing separate caches or separate access paths for data and instructions (the so-called modified Harvard architecture), and using branch prediction can all improve performance.

The problem can also be sidestepped somewhat by using parallel computing, using for example the non-uniform memory access (NUMA) architecture—this approach is commonly employed by supercomputers. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like FORTRAN were, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers.

As of 1996, a database benchmark study found that three out of four CPU cycles were spent waiting for memory. Researchers expect that increasing the number of simultaneous instruction streams with multithreading or single-chip multiprocessing will make this bottleneck even worse.[28]

Self-modifying code

Aside from the von Neumann bottleneck, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a computer crash. Memory protection and other forms of access control can usually protect against both accidental and malicious program modification.

Program modification can also be beneficial: because instructions are stored as ordinary data, the von Neumann architecture allows a program, for example, to be held in memory in encrypted form and decrypted before it is executed.

Further reading

  • Bowden, B. V., ed. (1953), Faster Than Thought: A Symposium on Digital Computing Machines, London: Sir Isaac Pitman and Sons Ltd. 
  • Rojas, Raúl; Hashagen, Ulf, eds. (2000), The First Computers: History and Architectures, MIT Press, ISBN 0-262-18197-5 
  • Davis, Martin (2000), The universal computer: the road from Leibniz to Turing, New York: W. W. Norton & Company Inc., ISBN 0-393-04785-7  republished as: Davis, Martin (2001), Engines of Logic: Mathematicians and the Origin of the Computer, New York: W. W. Norton & Company, ISBN 978-0-393-32229-3 
  • Backus, John (August 1978), "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs", Communications of the ACM, 21 (8); 1977 ACM Turing Award Lecture. Online PDF: see details at http://www.cs.tufts.edu/~nr/backus-lecture.html
  • Bell, C. Gordon; Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. Massive (668 pages)
  • Copeland, Jack (2006), "Colossus and the Rise of the Modern Computer", in Copeland, B. Jack, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4 
  • Ganesan, Deepak (2009), The von Neumann Model(PDF), retrieved 2011-10-22 
  • McCartney, Scott (1999). ENIAC: The Triumphs and Tragedies of the World's First Computer. Walker & Co. ISBN 0-8027-1348-3. 
  • Goldstine, Herman H. (1972). The Computer from Pascal to von Neumann. Princeton University Press. ISBN 0-691-08104-2. 
  • Shurkin, Joel (1984). Engines of the Mind: A history of the Computer. New York, London: W. W. Norton & Company. ISBN 0-393-01804-0. 

References

  1. ^ a b c von Neumann, John (1945), First Draft of a Report on the EDVAC (PDF), archived from the original (PDF) on 2013-03-14, retrieved 2011-08-24 
  2. ^Ganesan 2009
  3. ^Markgraf, Joey D. (2007), The Von Neumann Bottleneck, archived from the original on December 12, 2013 
  4. ^Copeland 2006, p. 104
  5. ^MFTL (My Favorite Toy Language) entry Jargon File 4.4.7, retrieved 2008-07-11 
  6. ^Turing, Alan M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2 (published 1937), 42, pp. 230–265, doi:10.1112/plms/s2-42.1.230  (and Turing, Alan M. (1938), "On Computable Numbers, with an Application to the Entscheidungsproblem. A correction", Proceedings of the London Mathematical Society, 2 (published 1937), 43 (6), pp. 544–546, doi:10.1112/plms/s2-43.6.544 )
  7. ^"Electronic Digital Computers", Nature, 162: 487, September 25, 1948, doi:10.1038/162487a0, archived from the original on April 6, 2009, retrieved April 10, 2009 
  8. ^Lukoff, Herman (1979). From Dits to Bits: A personal history of the electronic computer. Portland, Oregon, USA: Robotics Press. ISBN 0-89661-002-0. LCCN 79-90567. 
  9. ^ENIAC project administrator Grist Brainerd's December 1943 progress report for the first period of the ENIAC's development implicitly proposed the stored program concept (while simultaneously rejecting its implementation in the ENIAC) by stating that "in order to have the simplest project and not to complicate matters" the ENIAC would be constructed without any "automatic regulation".
  10. ^Copeland 2006, p. 113
  11. ^Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC, retrieved 2010-01-27 
  12. ^Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC, retrieved 2010-01-27  which cites Randell, Brian (1972), Meltzer, B.; Michie, D., eds., "On Alan Turing and the Origins of Digital Computers", Machine Intelligence, Edinburgh: Edinburgh University Press, 7: 10, ISBN 0-902383-26-4 
  13. ^Copeland 2006, pp. 108–111
  14. ^Bowden 1953, pp. 176,177
  15. ^Bowden 1953, p. 135
  16. ^"Electronic Computer Project". Institute for Advanced Study. Retrieved 2011-05-26. 
  17. ^ a b Campbell-Kelly, Martin (April 1982). "The Development of Computer Programming in Britain (1945 to 1955)". IEEE Annals of the History of Computing. 4 (2): 121–139. doi:10.1109/MAHC.1982.10016. 
  18. ^Robertson, James E. (1955), Illiac Design Techniques, report number UIUCDCS-R-1955-146, Digital Computer Laboratory, University of Illinois at Urbana-Champaign 
  19. ^Selective Sequence Electronic Calculator (USPTO Web site)
  20. ^Selective Sequence Electronic Calculator (Google Patents)
  21. ^Grosch, Herbert R. J. (1991), Computer: Bit Slices From a Life, Third Millennium Books, ISBN 0-88733-085-1 
  22. ^Lavington, Simon, ed. (2012). Alan Turing and his Contemporaries: Building the World's First Computers. London: British Computer Society. p. 61. ISBN 9781906124908. 
  23. ^Johnson, Roger (April 2008). "School of Computer Science & Information Systems: A Short History"(PDF). Birkbeck College. University of London. Retrieved 2017-07-23. 
  24. ^Bell, C. Gordon; Cady, R.; McFarland, H.; O'Laughlin, J.; Noonan, R.; Wulf, W. (1970), "A New Architecture for Mini-Computers—The DEC PDP-11"(PDF), Spring Joint Computer Conference, pp. 657–675 
  25. ^Null, Linda; Lobur, Julia (2010), The essentials of computer organization and architecture (3rd ed.), Jones & Bartlett Learning, pp. 36, 199–203, ISBN 978-1-4496-0006-8 
  26. ^Backus, John W. "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs". doi:10.1145/359576.359579. 
  27. ^Dijkstra, Edsger W."E. W. Dijkstra Archive: A review of the 1977 Turing Award Lecture". Retrieved 2008-07-11. 
  28. ^Sites, Richard L.; Patt, Yale. "Architects Look to Processors of Future". Microprocessor report. 1996

One instruction set computer

A one instruction set computer (OISC), sometimes called an ultimate reduced instruction set computer (URISC), is an abstract machine that uses only one instruction – obviating the need for a machine language opcode.[1][2][3] With a judicious choice for the single instruction and given infinite resources, an OISC is capable of being a universal computer in the same manner as traditional computers that have multiple instructions.[2]:55 OISCs have been recommended as aids in teaching computer architecture[1]:327[2]:2 and have been used as computational models in structural computing research.[3]

Machine architecture

In a Turing-complete model, each memory location can store an arbitrary integer, and – depending on the model – there may be arbitrarily many locations. The instructions themselves reside in memory as a sequence of such integers.

There exists a class of universal computers with a single instruction based on bit manipulation such as bit copying or bit inversion. Since their memory model is finite, as is the memory structure used in real computers, those bit manipulation machines are equivalent to real computers rather than to Turing machines.[4]

Currently known OISCs can be roughly separated into three broad categories:

  • Bit-manipulating machines
  • Transport Triggered Architecture machines
  • Arithmetic-based Turing-complete machines

Bit-manipulating machines

Bit-manipulating machines are the simplest class.

BitBitJump

A bit copying machine, called BitBitJump, copies one bit in memory and passes the execution unconditionally to the address specified by one of the operands of the instruction. This process turns out to be capable of universal computation (i.e. being able to execute any algorithm and to interpret any other universal machine) because copying bits can conditionally modify the code that will be subsequently executed.
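
The following Python sketch (an illustration, not taken from the cited sources) shows how such a machine can be emulated; the operand width, the little-endian bit ordering, and the fixed step limit standing in for a halting convention are all assumptions of the example:

def run_bitbitjump(mem, word=8, steps=1000):
    """Emulate a BitBitJump machine: mem is a list of 0/1 bits, word is the operand width."""
    def read_word(addr):
        # Assemble an operand from `word` consecutive bits (little-endian; an assumption).
        return sum(mem[addr + i] << i for i in range(word))

    pc = 0
    for _ in range(steps):                # fixed step limit stands in for a halting rule
        a = read_word(pc)                 # source bit address
        b = read_word(pc + word)          # destination bit address
        c = read_word(pc + 2 * word)      # address of the next instruction
        mem[b] = mem[a]                   # the single operation: copy one bit
        pc = c                            # ...then jump unconditionally
    return mem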

Toga computer

Another machine, called the Toga computer, inverts a bit and passes the execution conditionally depending on the result of inversion.
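
In the style of the pseudocode used later for subleq, Toga's single instruction can be sketched as follows (the exact branch sense – here, jumping when the inverted bit becomes 1 – is an assumption of this sketch rather than something specified above):

toga a, b   ; Mem[a] = NOT Mem[a]; if (Mem[a]) goto b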


Multi-bit copying machine

Yet another bit-operating machine, similar to BitBitJump, copies several bits at the same time. Computational universality is achieved in this case by keeping predefined jump tables in memory.

Transport Triggered Architecture

Transport Triggered Architecture (TTA) is a design in which computation is a side effect of data transport. Usually, some memory registers (triggering ports) within common address space perform an assigned operation when the instruction references them. For example, in an OISC using a single memory-to-memory copy instruction, this is done by triggering ports that perform arithmetic and instruction pointer jumps when written to.
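
As a concrete, purely illustrative sketch of the trigger-port idea, the following Python fragment emulates a copy-only machine in which writes to certain addresses cause an addition or a jump as a side effect; the port addresses, the two-word instruction layout, and the choice of operations are assumptions of the example:

# Assumed memory-mapped port addresses (illustrative only).
ADD_A, ADD_B, ADD_RESULT, PC_PORT = 100, 101, 102, 103

def write(mem, addr, value, state):
    """Store value at addr; certain addresses trigger side effects (the 'triggering ports')."""
    mem[addr] = value
    if addr == ADD_B:                     # writing the second operand triggers the adder
        mem[ADD_RESULT] = mem[ADD_A] + value
    elif addr == PC_PORT:                 # writing the program-counter port performs a jump
        state["pc"] = value

def step(mem, state):
    """Execute one 'move src to dst' instruction: two consecutive words name the addresses."""
    src, dst = mem[state["pc"]], mem[state["pc"] + 1]
    state["pc"] += 2
    write(mem, dst, mem[src], state)

A three-move program that copies two operands into ADD_A and ADD_B and then copies ADD_RESULT to a destination thus performs an addition entirely as a side effect of data transport.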

Arithmetic-based Turing-complete machines

Arithmetic-based Turing-complete machines use an arithmetic operation and a conditional jump. Like the two previous universal computers, this class is also Turing-complete. The instruction operates on integers which may also be addresses in memory.

Currently there are several known OISCs of this class, based on different arithmetic operations:

  • addition (addleq, add and branch if less than or equal to zero)[5]
  • decrement (DJN, decrement and branch (jump) if nonzero)[6]
  • increment (P1eq, plus 1 and branch if equal to another value)[7]
  • subtraction (subleq, subtract and branch if less than or equal)[8][9]

Instruction types

Common choices for the single instruction, each described in a section below, include:

  • subtract and branch if not equal to zero (SBNZ)
  • subtract and branch if less than or equal to zero (subleq)
  • subtract and branch if negative (subneg, also called SBN)
  • reverse subtract and skip if borrow (RSSB)
  • move (as used in transport triggered architectures)

Only one of these instructions is used in a given implementation. Hence, there is no need for an opcode to identify which instruction to execute; the choice of instruction is inherent in the design of the machine, and an OISC is typically named after the instruction it uses (e.g., an SBN OISC,[2]:41 the SUBLEQ language,[3]:4 etc.). Each of the above instructions can be used to construct a Turing-complete OISC.

This article presents only subtraction-based instructions among those that are not transport triggered. However, it is possible to construct Turing-complete machines using an instruction based on other arithmetic operations, e.g., addition. For example, one variation known as DLN (decrement and jump if not zero) has only two operands and uses decrement as the base operation. For more information, see Subleq derivative languages.

Subtract and branch if not equal to zero

The instruction ("Subtract and Branch if Not equal to Zero") subtracts the contents at address a from the contents at address b, stores the result at address c, and then, if the result is not 0, transfers control to address d (if the result is equal zero, execution proceeds to the next instruction in sequence).[3]

Subtract and branch if less than or equal to zero

The subleq instruction ("SUbtract and Branch if Less than or EQual to zero") subtracts the contents at address a from the contents at address b, stores the result at address b, and then, if the result is not positive, transfers control to address c (if the result is positive, execution proceeds to the next instruction in sequence).[3]:4–7

Pseudocode:

subleq a, b, c   ; Mem[b] = Mem[b] - Mem[a]; if (Mem[b] ≤ 0) goto c

Conditional branching can be suppressed by setting the third operand equal to the address of the next instruction in sequence. If the third operand is not written, this suppression is implied.

A variant is also possible with two operands and an internal accumulator, where the accumulator is subtracted from the memory location specified by the first operand. The result is stored in both the accumulator and the memory location, and the second operand specifies the branch address:

subleq2 a, b   ; Mem[a] = Mem[a] - ACCUM; ACCUM = Mem[a]; if (Mem[a] ≤ 0) goto b

Although this uses only two (instead of three) operands per instruction, correspondingly more instructions are then needed to effect various logical operations.

Synthesized instructions

It is possible to synthesize many types of higher-order instructions using only the subleq instruction.[3]:9–10

Unconditional branch:

JMP c 
subleq Z, Z, c

where Z is the address of a memory location that holds the value 0 (so the result of the subtraction is 0 and the branch to c is always taken).

Addition can be performed by repeated subtraction, with no conditional branching; e.g., the following instructions result in the content at location a being added to the content at location b:

ADD a, b 
subleq a, Z
subleq Z, b
subleq Z, Z

The first instruction subtracts the content at location a from the content at location Z (which is 0) and stores the result (which is the negative of the content at a) in location Z. The second instruction subtracts this result from b, storing in b this difference (which is now the sum of the contents originally at a and b); the third instruction restores the value 0 to Z. For example, if location a holds 3 and location b holds 4, the first instruction sets Z to 0 − 3 = −3, the second sets b to 4 − (−3) = 7, and the third restores Z to 0.

A copy instruction can be implemented similarly; e.g., the following instructions result in the content at location b getting replaced by the content at location a, again assuming the content at location Z is maintained as 0:

MOV a, b 
subleq b, b
subleq a, Z
subleq Z, b
subleq Z, Z

Any desired arithmetic test can be built. For example, a branch-if-zero condition can be assembled from the following instructions:

BEQ b, c 
subleq b, Z, L1
subleq Z, Z, OUT
L1: subleq Z, Z
    subleq Z, b, c
OUT: ...

With Z initially 0, the first instruction computes Z = −b and branches to L1 if the content at b is non-negative; otherwise the second instruction resets Z and jumps to OUT. At L1, Z is restored to 0, and the final instruction branches to c only if the (unchanged) content at b is less than or equal to zero, so control reaches c exactly when the content at b is zero.

Subleq2 can also be used to synthesize higher-order instructions, although it generally requires more operations for a given task. For example, no fewer than 10 subleq2 instructions are required to flip all the bits in a given byte:

NOT a 
subleq2 tmp         ; tmp = 0 (tmp = temporary register)
subleq2 tmp
subleq2 minus_one   ; acc = -1
subleq2 a           ; a' = a + 1
subleq2 Z           ; Z = - a - 1
subleq2 tmp         ; tmp = a + 1
subleq2 a           ; a' = 0
subleq2 tmp         ; load tmp into acc
subleq2 a           ; a' = - a - 1 ( = ~a )
subleq2 Z           ; set Z back to 0

Emulation

The following program (written in pseudocode) emulates the execution of a subleq-based OISC:

int memory[], program_counter, a, b, c

program_counter = 0

while (program_counter >= 0):
    a = memory[program_counter]
    b = memory[program_counter + 1]
    c = memory[program_counter + 2]
    if (a < 0 or b < 0):
        program_counter = -1
    else:
        memory[b] = memory[b] - memory[a]
        if (memory[b] > 0):
            program_counter += 3
        else:
            program_counter = c

This program assumes that memory[] is indexed by nonnegative integers. Consequently, for a subleq instruction (a, b, c), the program interprets a < 0, b < 0, or an executed branch to c < 0 as a halting condition. Similar interpreters written in a subleq-based language (i.e., self-interpreters, which may use self-modifying code as allowed by the nature of the subleq instruction) can be found in the external links below.
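
The pseudocode above translates directly into runnable Python. The following sketch (an illustration, not code from the cited sources) executes the synthesized ADD sequence from the earlier section; the branch operand of each instruction is written out explicitly, pointing either at the next instruction or at −1 to halt:

def run_subleq(mem, pc=0):
    """Execute subleq triples (a, b, c) stored in mem until pc or an operand goes negative."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        if a < 0 or b < 0:               # halting convention from the pseudocode above
            break
        mem[b] -= mem[a]
        pc = pc + 3 if mem[b] > 0 else c

# Demo: ADD a, b synthesized as three subleq instructions (addresses 9, 10, 11 hold a, b, Z).
A, B, Z = 9, 10, 11
program = [
    A, Z, 3,     # subleq a, Z : Z = Z - a; continues at 3 either way
    Z, B, 6,     # subleq Z, b : b = b - Z = b + a
    Z, Z, -1,    # subleq Z, Z : restore Z to 0, then halt (branch to -1)
]
mem = program + [3, 4, 0]                # a = 3, b = 4, Z = 0
run_subleq(mem)
print(mem[B])                            # prints 7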

Compilation

There is a compiler called Higher Subleq written by Oleg Mazonka that compiles a simplified C program into subleq code.[10]

Subtract and branch if negative

The subneg instruction ("SUbtract and Branch if NEGative"), also called SBN, is defined similarly to subleq:[2]:41,51–52

subneg a, b, c   ; Mem[b] = Mem[b] - Mem[a]; if (Mem[b] < 0) goto c

Conditional branching can be suppressed by setting the third operand equal to the address of the next instruction in sequence. If the third operand is not written, this suppression is implied.

Synthesized instructions

It is possible to synthesize many types of higher-order instructions using only the subneg instruction. For simplicity, only one synthesized instruction is shown here to illustrate the difference between subleq and subneg.

Unconditional branch:[2]:88–89

JMP c
subneg POS, Z, c
...
c:  subneg Z, Z

where Z and POS are locations previously set to contain 0 and a positive integer, respectively;

Unconditional branching is assured only if Z initially contains 0 (or a value less than the integer stored in POS). A follow-up instruction is required to clear Z after the branching, assuming that the content of Z must be maintained as 0.

A variant is also possible with four operands - subneg4. The reversal of minuend and subtrahend eases implementation in hardware. The non-destructive result simplifies the synthetic instructions.

subneg4 s, m, r, j   ; subtrahend, minuend, result and jump addresses; Mem[r] = Mem[m] - Mem[s]; if (Mem[r] < 0) goto j

Reverse subtract and skip if borrow

In a Reverse Subtract and Skip if Borrow (RSSB) instruction, the accumulator is subtracted from the memory location and the next instruction is skipped if there was a borrow (memory location was smaller than the accumulator). The result is stored in both the accumulator and the memory location. The program counter is mapped to memory location 0. The accumulator is mapped to memory location 1.[2]
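
A minimal Python sketch of a single RSSB step may make the borrow-and-skip rule easier to follow (illustrative only; the single-operand instruction layout and the point at which the program counter is advanced are assumptions of this sketch):

def rssb_step(mem):
    """Execute one RSSB instruction; mem[0] is the program counter, mem[1] the accumulator."""
    pc = mem[0]
    target = mem[pc]                      # the single operand: a memory address
    borrow = mem[target] < mem[1]         # borrow: the location is smaller than the accumulator
    result = mem[target] - mem[1]
    mem[0] = pc + (2 if borrow else 1)    # advance first; skip the next instruction on a borrow
    mem[target] = result                  # the result goes both to the memory location...
    mem[1] = result                       # ...and to the accumulator (mem[1])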

Example

To set x to the value of y minus z:

# First, move y to the destination location x.
RSSB temp   # Three instructions required to clear acc, temp [See Note 1]
RSSB temp
RSSB temp
RSSB x      # Two instructions clear acc, x, since acc is already clear
RSSB x
RSSB y      # Load y into acc: no borrow
RSSB temp   # Store -y into acc, temp: always borrow and skip
RSSB temp   # Skipped
RSSB x      # Store y into x, acc
# Second, perform the operation.
RSSB temp   # Three instructions required to clear acc, temp
RSSB temp
RSSB temp
RSSB z      # Load z
RSSB x      # x = y - z [See Note 2]

[Note 1] If the value stored at "temp" is initially a negative value and the instruction that executed right before the first "RSSB temp" in this routine borrowed, then four "RSSB temp" instructions will be required for the routine to work.

[Note 2] If the value stored at "z" is initially a negative value then the final "RSSB x" will be skipped and thus the routine will not work.

Transport triggered architecture

Main article: transport triggered architecture

A transport triggered architecture uses only the move instruction, hence it was originally called a "move machine". This instruction moves the contents of one memory location to another memory location:[2]:42[11]

move a to b ; Mem[b] := Mem[a]

sometimes written as:

a -> b ; Mem[b] := Mem[a]

Arithmetic is performed using a memory-mapped arithmetic logic unit (ALU) and jumps are performed using a memory-mapped program counter (PC).
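
For example, assuming hypothetical memory-mapped locations ALU_A, ALU_ADD (writing to which triggers the addition) and ALU_RESULT, an addition could be expressed as a sequence of moves in the notation above:

move a to ALU_A          ; Mem[ALU_A] := Mem[a]
move b to ALU_ADD        ; triggers Mem[ALU_RESULT] := Mem[ALU_A] + Mem[b]
move ALU_RESULT to c     ; Mem[c] := Mem[ALU_RESULT]

A jump is performed in the same way, by moving the desired target address into the memory-mapped program counter.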

A commercial transport triggered architecture microcontroller has been produced called MAXQ, which hides the apparent inconvenience of an OISC by using a "transfer map" that represents all possible destinations for the instructions.[12]

Cryptoleq

Cryptoleq[13] is a language consisting of one eponymous instruction. It is capable of performing general-purpose computation on encrypted programs and is a close relative of Subleq. Cryptoleq works on continuous cells of memory using direct and indirect addressing, and performs two operations, O1 and O2, on three values A, B, and C:

Cryptoleq a, b, c   ; [b] = O1([a], [b]); IP = c if O2([b]) ≤ 0, IP = IP + 3 otherwise

where a, b and c are addressed by the instruction pointer, IP, with the value of IP addressing a, IP + 1 pointing to b, and IP + 2 to c.

In Cryptoleq, the operations O1 and O2 are defined over encrypted operands: O1 combines its operands by modular inversion and multiplication, and O2 recovers from its operand the information needed for the ≤ 0 test.

The main difference from Subleq is that in Subleq, O1(x, y) simply subtracts y from x and O2(x) equals x. Cryptoleq is also homomorphic to Subleq: modular inversion and multiplication are homomorphic to subtraction, and the O2 operation corresponds to the Subleq test when the values are unencrypted. A program written in Subleq can therefore run on a Cryptoleq machine, giving backwards compatibility. Cryptoleq, however, implements fully homomorphic calculations, since the model is able to perform multiplications. Multiplication on the encrypted domain is assisted by a unique function G that is assumed to be difficult to reverse engineer; based on the O2 operation, G takes the encrypted value of a variable (say m) together with an encrypted zero and produces a re-encryption of that value.

The multiplication algorithm is based on addition and subtraction, uses the function G, and has no conditional jumps or branches. Cryptoleq encryption is based on the Paillier cryptosystem.

External links

  • Cryptoleq Processor made at NYU Abu Dhabi

References

  1. ^ abMavaddat, F.; Parhami, B. (October 1988). "URISC: The Ultimate Reduced Instruction Set Computer"(PDF). Int'l J. Electrical Engineering Education. Manchester University Press. 25 (4): 327–334. Retrieved 2010-10-04.  This paper considers "a machine with a single 3-address instruction as the ultimate in RISC design (URISC)". Without giving a name to the instruction, it describes a SBN OISC and its associated assembly language, emphasising that this is a universal (i.e., Turing-complete) machine whose simplicity makes it ideal for classroom use.
  2. ^ abcdefghGilreath, William F.; Laplante, Phillip A. (2003). Computer Architecture: A Minimalist Perspective. Springer Science+Business Media. ISBN 978-1-4020-7416-5. Archived from the original on 2009-06-13.  Intended for researchers, computer system engineers, computational theorists and students, this book provides an in-depth examination of various OISCs, including SBN and MOVE. It attributes SBN to W. L. van der Poel (1956).
  3. ^ abcdefNürnberg, Peter J.; Wiil, Uffe K.; Hicks, David L. (September 2003), "A Grand Unified Theory for Structural Computing", Metainformatics: International Symposium, MIS 2003, Graz, Austria: Springer Science+Business Media, pp. 1–16, ISBN 978-3-540-22010-7  This research paper focusses entirely on a SUBLEQ OISC and its associated assembly language, using the name SUBLEQ for "both the instruction and any language based upon it".
  4. ^Oleg Mazonka, "Bit Copying: The Ultimate Computational Simplicity", Complex Systems Journal 2011, Vol 19, N3, pp. 263–285
  5. ^"Addleq". Esolang Wiki. Retrieved 2017-09-16. 
  6. ^"DJN OISC". Esolang Wiki. Retrieved 2017-09-16. 
  7. ^"P1eq". Esolang Wiki. Retrieved 2017-09-16. 
  8. ^Mazonka, Oleg (October 2009). "SUBLEQ". Archived from the original on 2017-06-29. Retrieved 2017-09-16. 
  9. ^"Subleq". Esolang Wiki. Retrieved 2017-09-16. 
  10. ^Oleg Mazonka A Simple Multi-Processor Computer Based on Subleq
  11. ^Jones, Douglas W. (June 1988). "The Ultimate RISC". ACM SIGARCH Computer Architecture News. New York: ACM. 16 (3): 48–55. doi:10.1145/48675.48683. Retrieved 2010-10-04.  "Reduced instruction set computer architectures have attracted considerable interest since 1980. The ultimate RISC architecture presented here is an extreme yet simple illustration of such an architecture. It has only one instruction, move memory to memory, yet it is useful."
  12. ^Catsoulis, John (2005), Designing embedded hardware (2 ed.), O'Reilly Media, pp. 327–333, ISBN 978-0-596-00755-3 
  13. ^Mazonka, Oleg; Tsoutsos, Nektarios Georgios; Maniatakos, Michail (2016), Cryptoleq: A Heterogeneous Abstract Machine for Encrypted and Unencrypted Computation 