
MAJC

From Wikipedia, the free encyclopedia
Designer: Sun Microsystems
Introduced: 1990s
Design: VLIW

MAJC (Microprocessor Architecture for Java Computing) was a Sun Microsystems multi-core, multithreaded, very long instruction word (VLIW) microprocessor design from the mid-to-late 1990s. Originally called the UltraJava processor, MAJC was targeted at running Java programs, whose "late compiling" allowed Sun to make several favourable design decisions. The processor was used in two commercial graphics cards from Sun. Lessons learned about multithreading on a multi-core processor provided a basis for later OpenSPARC implementations such as the UltraSPARC T1.

Design elements

Move instruction scheduling to the compiler

Like other VLIW designs, notably Intel's IA-64 (Itanium), MAJC attempted to improve performance by moving several expensive operations out of the processor and into the related compilers. In general, VLIW designs attempt to eliminate the instruction scheduler, which often represents a relatively large share of a processor's transistor budget. With this portion of the CPU moved into software, those transistors can be used for other purposes, often to add functional units that process more instructions at once, or to increase the amount of cache memory and thereby reduce the time spent waiting for data to arrive from the much slower main memory. Although MAJC shared these general concepts, it was unlike other VLIW designs, and processors in general, in a number of specific details.
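
As a rough illustration of what "scheduling in the compiler" means, the following sketch (in Java, with invented instruction and register names; it is not Sun's toolchain) packs mutually independent operations into issue packets ahead of time, the job a hardware scheduler would otherwise perform at run time:

    import java.util.*;

    // Toy illustration of compile-time packet scheduling: group instructions
    // that have no register dependencies so they can issue in the same cycle.
    public class PacketScheduler {
        record Instr(String text, Set<String> reads, String writes) {}

        // Independent if neither reads or rewrites the register the other writes
        // (covers read-after-write, write-after-read, and write-after-write).
        static boolean independent(Instr a, Instr b) {
            return !a.reads().contains(b.writes())
                    && !b.reads().contains(a.writes())
                    && !a.writes().equals(b.writes());
        }

        public static void main(String[] args) {
            List<Instr> program = List.of(
                new Instr("C = A + B", Set.of("A", "B"), "C"),
                new Instr("F = D * E", Set.of("D", "E"), "F"),   // independent of the first
                new Instr("G = C + F", Set.of("C", "F"), "G"));  // depends on both
            List<List<Instr>> packets = new ArrayList<>();
            for (Instr ins : program) {
                List<Instr> last = packets.isEmpty() ? null : packets.get(packets.size() - 1);
                if (last != null && last.stream().allMatch(p -> independent(p, ins))) {
                    last.add(ins);                                     // issue in the same cycle
                } else {
                    packets.add(new ArrayList<>(List.of(ins)));        // start a new packet
                }
            }
            packets.forEach(p -> System.out.println("packet: " + p.stream().map(Instr::text).toList()));
        }
    }

Run on the three-instruction example above, the first two operations land in one packet and the dependent addition starts a second packet.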

Generalized functional units

Most processors include a number of separate "subprocessors" known as functional units, each tuned to operating on a particular type of data. For instance, a modern CPU typically has two or three functional units dedicated to integer data and logic instructions (the ALUs), while other units handle floating-point numbers (the FPUs) or multimedia data (the SIMD units). MAJC instead used a single multi-purpose type of functional unit that could process any sort of data. In theory this approach meant that processing any one type of data would take longer, perhaps much longer, than processing the same data in a unit dedicated to that type. On the other hand, the general-purpose units meant that large portions of the CPU did not sit idle simply because the program happened, at that point in time, to be performing mostly (for example) floating-point calculations.
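
The utilization argument can be made concrete with a back-of-the-envelope model. The numbers below are invented for illustration and assume, optimistically, that a general-purpose unit matches a dedicated unit's per-operation latency:

    // Invented numbers: when the instruction mix is lopsided, dedicated units
    // sit idle, while interchangeable general-purpose units all stay busy.
    public class UnitUtilization {
        public static void main(String[] args) {
            int fpOps = 100, intOps = 10;   // a phase of the program dominated by FP work
            int totalUnits = 4;

            // Dedicated design: 2 ALUs handle only integer ops, 2 FPUs only FP ops.
            int dedicatedCycles = Math.max(ceilDiv(fpOps, 2), ceilDiv(intOps, 2));

            // Generalized design: all 4 units accept either kind of op.
            int generalCycles = ceilDiv(fpOps + intOps, totalUnits);

            System.out.println("dedicated units:   " + dedicatedCycles + " cycles (ALUs mostly idle)");
            System.out.println("generalized units: " + generalCycles + " cycles");
        }

        static int ceilDiv(int a, int b) { return (a + b - 1) / b; }
    }

With this mix the dedicated machine needs 50 cycles while its ALUs do almost nothing, whereas the generalized machine finishes in 28; the trade-off reverses if each general-purpose operation is sufficiently slower.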

Variable-length instruction packets

Another difference is that MAJC allowed variable-length "instruction packets", which in a VLIW design contain a number of instructions that the compiler has determined can be run at the same time. Most VLIW architectures use fixed-length packets; when the compiler cannot find enough instructions to fill a packet, the remaining slots are filled with NOPs, which simply take up space. Although variable-length instruction packets added some complexity to the CPU, they reduced code size, and thus the number of expensive cache misses, by increasing the amount of code held in the cache at any one time.
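
A small sketch of the code-size effect, using invented packet sizes and a hypothetical four-issue machine:

    import java.util.List;

    // Illustrative only: a fixed-width VLIW pads every packet to the machine
    // width with NOPs, while a variable-length encoding stores only the
    // instructions actually issued (plus a small per-packet length header).
    public class PacketSize {
        public static void main(String[] args) {
            int machineWidth = 4;                      // hypothetical 4-issue machine
            int bytesPerInstr = 4;
            // Useful instructions the compiler found for each packet.
            List<Integer> packetFill = List.of(4, 1, 2, 1, 3, 2);

            int fixedBytes = packetFill.size() * machineWidth * bytesPerInstr;
            int variableBytes = packetFill.stream().mapToInt(n -> n * bytesPerInstr).sum()
                    + packetFill.size();               // assume ~1 header byte per packet

            System.out.println("fixed-width encoding:    " + fixedBytes + " bytes (NOP padding)");
            System.out.println("variable-length packets: " + variableBytes + " bytes");
        }
    }

For this invented sequence the fixed encoding needs 96 bytes and the variable one 58, so more of the program fits in the instruction cache.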

Avoiding interlocks and stalls

The primary difference was the way the MAJC design required the compiler to avoid interlocks, pauses in execution that occur when one instruction must wait for the result of another before it can run. For instance, if the processor is fed the instructions C = A + B, E = C + D, the second instruction can run only after the first completes. Most processors include interlock logic that detects these dependencies, stalling and rescheduling the dependent instruction and allowing other instructions to run while the value of C is being calculated. However, these interlocks are very expensive in terms of chip real estate, and represent the majority of the instruction scheduler's logic.
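
The dependence in that example can be stated explicitly; detecting it is exactly what hardware interlock logic does at run time, and what a MAJC-style compiler must do at compile time instead. The sketch below is illustrative only:

    import java.util.*;

    // "E = C + D" reads the C that "C = A + B" writes: a read-after-write hazard.
    public class HazardCheck {
        record Instr(String text, Set<String> reads, String writes) {}

        static boolean rawHazard(Instr first, Instr second) {
            return second.reads().contains(first.writes());
        }

        public static void main(String[] args) {
            Instr i1 = new Instr("C = A + B", Set.of("A", "B"), "C");
            Instr i2 = new Instr("E = C + D", Set.of("C", "D"), "E");
            if (rawHazard(i1, i2)) {
                System.out.println("'" + i2.text() + "' must wait for '" + i1.text() + "'");
            }
        }
    }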

In order for the compiler to avoid these interlocks, it would have to know exactly how long each instruction would take to complete. For instance, if a particular implementation took three cycles to complete a floating-point multiplication, MAJC compilers would attempt to fill those three cycles with other instructions that were not waiting on the result. A change in the actual implementation might reduce this delay to only two cycles, however, and the compiler would need to be aware of that change.
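
The sketch below shows the idea with a made-up, per-implementation latency table (fmul taking three cycles): independent additions are placed into the waiting cycles so that no hardware interlock is needed. It is a simplification of the scheme described above, not Sun's compiler:

    import java.util.*;

    // Latency-aware list scheduling: pick, each cycle, an instruction whose
    // inputs are already available according to the implementation's latencies.
    public class LatencyScheduler {
        record Instr(String text, String op, Set<String> reads, String writes) {}

        // Hypothetical latencies for one particular implementation.
        static final Map<String, Integer> LATENCY = Map.of("fmul", 3, "add", 1);

        public static void main(String[] args) {
            List<Instr> program = List.of(
                new Instr("X = A * B",  "fmul", Set.of("A", "B"), "X"),
                new Instr("Y = X + C",  "add",  Set.of("X", "C"), "Y"),   // needs X
                new Instr("T1 = P + Q", "add",  Set.of("P", "Q"), "T1"),  // independent filler
                new Instr("T2 = R + S", "add",  Set.of("R", "S"), "T2")); // independent filler

            Map<String, Integer> readyAt = new HashMap<>();   // register -> cycle its value is ready
            List<Instr> pending = new ArrayList<>(program);
            int cycle = 0;
            while (!pending.isEmpty()) {
                Instr pick = null;
                final int now = cycle;
                for (Instr ins : pending) {                   // first instruction whose inputs are ready
                    if (ins.reads().stream().allMatch(r -> readyAt.getOrDefault(r, 0) <= now)) {
                        pick = ins;
                        break;
                    }
                }
                if (pick != null) {
                    System.out.println("cycle " + cycle + ": " + pick.text());
                    readyAt.put(pick.writes(), cycle + LATENCY.get(pick.op()));
                    pending.remove(pick);
                } else {
                    System.out.println("cycle " + cycle + ": (nothing independent left -- would stall)");
                }
                cycle++;
            }
        }
    }

The multiply issues at cycle 0, the two fillers cover cycles 1 and 2, and the dependent add issues at cycle 3, exactly when its input becomes available under the assumed three-cycle latency.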

This means that the compiler was tied not to MAJC as a whole, but to a particular implementation of MAJC, that is, to each individual CPU based on the MAJC design. This would normally be a serious logistical problem; consider the number of different variations of the Intel IA-32 design, for instance: each one would need its own dedicated compiler, and the developer would have to produce a different binary for every one. However, it is precisely this concept that drives the Java market: there is a different compiler for each ISA, and it is installed on the client's machine instead of the developer's. The developer ships only a single bytecode version of the program, and the user's machine compiles it for the underlying platform.
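
The deployment model can be sketched as follows; the implementation identifiers and latency numbers are hypothetical, not a real Sun API. One shipped bytecode binary is compiled on the user's machine against the parameters of the exact chip present:

    import java.util.Map;

    // One bytecode binary, many implementation-specific compilations:
    // the compiler on the client looks up the scheduling parameters
    // (here, instruction latencies) of the chip it is actually running on.
    public class PerImplementationCompile {
        // Hypothetical implementation IDs and latency tables.
        static final Map<String, Map<String, Integer>> IMPLEMENTATIONS = Map.of(
            "majc-revA", Map.of("fmul", 3, "add", 1),
            "majc-revB", Map.of("fmul", 2, "add", 1));   // later revision, faster multiplier

        public static void main(String[] args) {
            String thisChip = "majc-revB";               // would be detected on the user's machine
            Map<String, Integer> latencies = IMPLEMENTATIONS.get(thisChip);
            System.out.println("compiling the shipped bytecode for " + thisChip
                    + ", scheduling fmul with a " + latencies.get("fmul") + "-cycle latency");
        }
    }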

In reality, scheduling instructions in this fashion turns out to be a very difficult problem. In real-world use, execution frequently reaches a point where the data needed is not in the cache and every remaining instruction also depends on that data. In these cases the processor may stall for long periods waiting on main memory. The VLIW approach does not help much in this regard; although the compiler can spend more time looking for independent instructions to run, that does not mean it can actually find any.

MAJC attempted to address this problem by being able to execute code from other threads when the current thread stalled on memory. Switching threads is normally a very expensive process known as a context switch, and on a conventional processor the cost of the switch would overwhelm any savings and generally slow the machine down. On MAJC, the system could hold the state for up to four threads at the same time, reducing a context switch to a few instructions in length. This feature has since appeared on other processors; Intel refers to it as Hyper-Threading.
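
A toy cost model (with invented cycle counts, and assuming in the best case that another resident thread always has work ready) shows why keeping several hardware contexts pays off:

    // Invented numbers: compare stalling on every cache miss with switching
    // to another resident thread context at a small fixed cost.
    public class ThreadSwitchModel {
        public static void main(String[] args) {
            int memoryLatency = 100;       // cycles a miss stalls a single-context core
            int switchCost = 3;            // cycles to switch between resident contexts
            int missesObserved = 4;        // say each of four threads misses once in this window
            int usefulWorkPerThread = 50;

            int singleContext = 4 * usefulWorkPerThread + missesObserved * memoryLatency;
            // With other threads ready to run, the misses overlap with their work;
            // only the switch cost is paid (a best case).
            int fourContexts = 4 * usefulWorkPerThread + missesObserved * switchCost;

            System.out.println("one context, stalling on every miss: " + singleContext + " cycles");
            System.out.println("four resident contexts, switching:   " + fourContexts + " cycles");
        }
    }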

MAJC took this idea one step further and tried to prefetch data and instructions needed by threads while they were stalled. Most processors include similar functionality for parts of an instruction stream, known as speculative execution, in which the processor runs both possible outcomes of a branch while waiting for the deciding value to be calculated. MAJC instead continued to run the stalled thread as if it were not stalled, using this execution to find, and then load, any data or instructions that would soon be needed when the thread stopped stalling. Sun referred to this as Space-Time Computing (STC); it is a form of speculative multithreading.
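
The run-ahead idea can be sketched as follows; this is a deliberately simplified model of the concept described above, not Sun's implementation. The speculative copy's only permitted effect is to touch the addresses the real thread will need, warming the cache:

    import java.util.HashSet;
    import java.util.Set;

    // While the real thread waits on memory, a speculative copy keeps
    // executing; its results are discarded, but the loads it issues
    // bring the needed lines into the cache before the real thread resumes.
    public class RunAheadSketch {
        static Set<Integer> cache = new HashSet<>();

        static void touch(int address) { cache.add(address); }   // stands in for a prefetch

        public static void main(String[] args) {
            int[] upcomingLoads = {0x1000, 0x1040, 0x1080};       // addresses the stalled thread will read next

            // Speculative run-ahead during the stall: discard results, issue the loads.
            for (int addr : upcomingLoads) {
                touch(addr);
            }
            System.out.println("cache lines warmed while stalled: " + cache.size());
        }
    }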

Processors up to this point had tried to extract parallelism from a single thread, a technique that was reaching its limits in terms of diminishing returns. It seems that, in a general sense, the MAJC design attempted to avoid stalls by running across threads (and programs), as opposed to looking for parallelism within a single thread. VLIW is generally expected to fare somewhat worse in terms of stalls, because it is difficult to understand runtime behaviour at compile time, which makes the MAJC approach to dealing with this problem particularly interesting.

Implementations

Sun built a single model of the MAJC, the two-core MAJC 5200, which was the heart of Sun's XVR-1000 and XVR-4000 workstation graphics boards. However, many of the multi-core and multithreading design ideas, notably the use of multiple threads to reduce stalling delays, worked their way into the Sun SPARC processor line, as well as into designs from other companies. Additionally, the MAJC idea of designing the processor to run as many threads as possible, as opposed to as many instructions as possible, appears to be the basis of the later UltraSPARC T1 (code-named Niagara) design.
