immediate field
This type of checking should be expanded to cover more instructions...
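A minimal sketch of this kind of immediate-field check, using a hypothetical
helper (not the actual Mesa code):

    #include <assert.h>

    /* Hypothetical helper: verify that a signed immediate fits in an
     * instruction's N-bit immediate field before it is encoded.
     */
    static void
    check_signed_imm(int imm, unsigned bits)
    {
       const int lo = -(1 << (bits - 1));
       const int hi = (1 << (bits - 1)) - 1;
       assert(imm >= lo && imm <= hi);
    }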
These are the defects found and fixed so far. Several more have
been observed; I'm working on them.
- Fixed an error in spe_load_uint() that caused incorrect values to be
loaded when the given unsigned value had its low 18 bits set to 0,
and that caused inefficient code to be emitted when the given value
had its high 14 bits set to 0 (see the load sketch after this list).
- Fixed a problem in stencil code generation where optional registers
weren't tracked correctly.
- Fixed a problem where the stencil function NEVER was acting as ALWAYS.
- Fixed several problems that could occur if stenciling were enabled but
depth was disabled.
- Fixed a problem with two-sided stencil writemask handling that could
cause a stencil writemask to not be applied.
- Fixed several state permutations that were incorrectly flagged as
not requiring stencil values to be calculated.
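The spe_load_uint() fix above is about picking the cheapest SPU immediate-load
sequence. The following is a rough sketch of the single-instruction cases, not
the actual Mesa implementation; the spe_il/spe_ilhu/spe_iohl/spe_ila emitters
follow the SPU mnemonics, and their exact signatures and the header path are
assumed here:

    #include "rtasm_ppc_spe.h"   /* header path assumed */

    static void
    load_uint_sketch(struct spe_function *p, unsigned rT, unsigned ui)
    {
       if ((ui & 0xffff8000) == 0 || (ui & 0xffff8000) == 0xffff8000) {
          spe_il(p, rT, (int) ui);        /* fits il's sign-extended 16-bit field */
       }
       else if ((ui & 0x0000ffff) == 0) {
          spe_ilhu(p, rT, ui >> 16);      /* only the upper halfword is non-zero */
       }
       else if ((ui & 0xfffc0000) == 0) {
          spe_ila(p, rT, ui);             /* fits ila's 18-bit unsigned field */
       }
       else {
          spe_ilhu(p, rT, ui >> 16);      /* general case: two instructions */
          spe_iohl(p, rT, ui & 0xffff);
       }
    }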
Conflicts:
src/gallium/auxiliary/gallivm/instructionssoa.cpp
src/gallium/auxiliary/gallivm/soabuiltins.c
src/gallium/auxiliary/rtasm/rtasm_x86sse.c
src/gallium/auxiliary/rtasm/rtasm_x86sse.h
src/mesa/main/texenvprogram.c
src/mesa/shader/arbprogparse.c
src/mesa/shader/prog_statevars.c
src/mesa/state_tracker/st_draw.c
src/mesa/vbo/vbo_exec_draw.c
Used for SIN, COS, EXP2, LOG2, POW instructions. TEX next.
Fixed some bugs in MIN, MAX, DP3, DP4, DPH instructions.
In rtasm code:
Special-case the spe_lqd() and spe_stqd() functions so they take byte offsets,
with the low-order 4 bits shifted out. This makes things consistent with SPU
assembly language conventions (see the sketch below).
Added spe_get_registers_used() function.
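A sketch of the byte-offset convention for spe_lqd()/spe_stqd() described
above (illustrative only; the real emitters take more arguments):

    #include <assert.h>

    /* lqd/stqd encode a 10-bit signed quadword index, so a byte offset has
     * its low 4 bits shifted out before encoding, matching the SPU assembly
     * convention of writing the offset in bytes.
     */
    static unsigned
    encode_quadword_offset(int byte_offset)
    {
       assert((byte_offset & 0xf) == 0);           /* must be 16-byte aligned */
       return ((unsigned) (byte_offset >> 4)) & 0x3ff;
    }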
Don't use the 'register' qualifier. Doxygen-ize comments. Remove 'extern'.
git+ssh://marcheu@git.freedesktop.org/git/mesa/mesa into gallium-0.2
Notably, gears doesn't.
Besides implying the x86 or x86-64 architecture, it also depends on SSE2
support being enabled in gcc.
This fixes the linux-debug build.
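The dependency described above is typically expressed with a preprocessor
guard; a minimal sketch (the Gallium arch macros are real, the combined
USE_SSE2_PATH name is just illustrative):

    /* Only take the SSE2 path when targeting x86/x86-64 *and* gcc has SSE2
     * code generation enabled (it defines __SSE2__ in that case).
     */
    #if (defined(PIPE_ARCH_X86) || defined(PIPE_ARCH_X86_64)) && defined(__SSE2__)
    #define USE_SSE2_PATH 1
    #else
    #define USE_SSE2_PATH 0
    #endif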
The assertion failed when we ran out of exec memory.
Found with the conform texcombine test.
This set of code changes is for stencil code generation
support. Both one-sided and two-sided stenciling are supported.
In addition to the raw code generation changes, these changes had
to be made elsewhere in the system:
- Added a new "register set" feature to the SPE assembly generation.
A "register set" is a way to allocate multiple registers and free
them all at the same time, delegating register allocation management
to the spe_function unit. It's quite useful in complex register
allocation schemes (like stenciling); a usage sketch appears after
this list.
- Added and improved SPE macro calculations.
These are operations between registers and unsigned integer
immediates. In many cases, the calculation can be performed
with a single instruction; the macros will generate the
single instruction if possible, or generate a register load
and register-to-register operation if not. These macro
functions are: spe_load_uint() (which has new ways to
load a value in a single instruction), spe_and_uint(),
spe_xor_uint(), spe_compare_equal_uint(), and spe_compare_greater_uint().
- Added facing to fragment generation. While rendering, the rasterizer
needs to be able to determine front- and back-facing fragments, in order
to correctly apply two-sided stencil. That requires these changes:
- Added front_winding field to the cell_command_render block, so that
the state tracker could communicate to the rasterizer what it
considered to be the front-facing direction.
- Added fragment facing as an input to the fragment function.
- Calculated facing is passed during emit_quad().
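A sketch of how the register-set feature described above might be used
(the *_register_set function names are taken from the description; the
header path and exact signatures are assumptions):

    #include "rtasm_ppc_spe.h"   /* header path assumed */

    static void
    emit_stencil_op_sketch(struct spe_function *f)
    {
       /* open a set: registers allocated from here on are freed together */
       spe_allocate_register_set(f);

       int tmp1 = spe_allocate_available_register(f);
       int tmp2 = spe_allocate_available_register(f);

       /* ... emit stencil arithmetic using tmp1 and tmp2 ... */
       (void) tmp1;
       (void) tmp2;

       /* release every register allocated since the set was opened */
       spe_release_register_set(f);
    }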
compile again.
- Use a lookup table for log2.
- Compute (float) (1 << ipart) by tweaking the exponent bits directly to
avoid integer overflow and float conversion (see the sketch after this list).
- Also table negative exponents to avoid float division and branching.
- Implement util_fast_exp as a function of util_fast_exp2.
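A generic illustration of the "tweak the exponent directly" step (not the
util_fast_exp2 source): for an integer ipart within the normal exponent
range, 2^ipart as a float is just the biased exponent 127 + ipart placed in
bits 23..30, so no integer shift (which overflows for ipart >= 31) and no
int-to-float conversion is needed.

    #include <stdint.h>

    static float
    exp2_int_part(int ipart)
    {
       union { float f; uint32_t u; } bits;
       bits.u = (uint32_t) (127 + ipart) << 23;   /* sign 0, mantissa 0 */
       return bits.f;
    }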
Special care must be taken when calling compiler-generated SSE2 functions
from the runtime-generated SSE2 code: the xmm registers must be saved, and
gcc must be notified that the stack is not 16-byte aligned.
It would be more efficient to keep the stack pointer 16-byte aligned, but
that is too hairy, and not consistent across all x86 platforms.
This has been tested on Linux x86 and Windows x86 userspace. It has not been
tested on x86-64, because that is broken for other reasons (even without this
change).
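One way to "notify gcc" of the unaligned stack, sketched below, is gcc's
force_align_arg_pointer attribute, which re-aligns the stack in the callee's
prologue; whether Mesa uses exactly this mechanism is an assumption here, and
the macro and function names are illustrative:

    #if defined(__GNUC__) && defined(__i386__)
    #define CALLED_FROM_RTASM __attribute__((force_align_arg_pointer))
    #else
    #define CALLED_FROM_RTASM
    #endif

    /* Safe to contain compiler-generated SSE2 code even when called from
     * runtime-generated code with a stack that is not 16-byte aligned.
     */
    static void CALLED_FROM_RTASM
    helper_called_from_generated_code(float *out, const float *in)
    {
       out[0] = in[0] * 2.0f;
    }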