- Jun 17, 2020
-
Kaiwen Xu authored
-
- Jun 15, 2020
-
ReinUsesLisp authored
Fix build-time issues on GCC. Confirmed through ASan that avoiding this initialization is safe.
-
- Jun 12, 2020
-
ReinUsesLisp authored
-
ReinUsesLisp authored
Emit code compatible with NV_gpu_program5. The generated code should also run on Fermi, but it wasn't tested on that architecture. Pascal has some issues not present on Turing GPUs.
-
- Jun 10, 2020
-
ReinUsesLisp authored
-
David Marcec authored
Fixes Animal Crossing svcBreak on launch.
-
David Marcec authored
GetTotalPhysicalMemoryAvailableWithoutSystemResource & GetTotalPhysicalMemoryUsedWithoutSystemResource seem to subtract the resource size from the usage.
-
- Jun 09, 2020
-
ReinUsesLisp authored
Instead of using a shared pointer as the template argument, use the underlying type and manage shared pointers explicitly. This makes removing shared pointers from the cache easier. While we are at it, make some misc style changes and general improvements (like insert_or_assign instead of operator[] plus operator=).
-
- Jun 08, 2020
-
ReinUsesLisp authored
Vertex buffer bindings become invalid after the stream buffer is invalidated. We were originally doing this, but it got lost at some point. - Fixes Animal Crossing: New Horizons, but it affects everything.
-
ReinUsesLisp authored
We have to invalidate whatever cache is being used before uploading the data, hence it makes more sense to return this on Map instead of Unmap.
-
ReinUsesLisp authored
Handle blits to images as 2D, even when they have block depth. - Fixes rendering issues on Luigi's Mansion 3
-
ReinUsesLisp authored
-
ReinUsesLisp authored
-
ReinUsesLisp authored
This allows rendering to 3D textures with more than one slice. Applications are allowed to render to more than one slice of a texture using gl_Layer from a VTG shader. This also requires reworking how 3D texture collisions are handled; for now, this commit allows rendering to slices but not to miplevels. When a render target attempts to write to a mipmap, we fall back to the previous implementation (copying or flushing as needed). - Fixes color correction 3D textures on UE4 games (rainbow effects). - Allows Xenoblade games to render to 3D textures directly.
-
- Jun 07, 2020
-
ReinUsesLisp authored
The rasterizer cache is no longer used. Each cache has its own generic implementation optimized for the cached data.
-
ReinUsesLisp authored
Trivially port the generic shader cache to Vulkan.
-
ReinUsesLisp authored
Trivially port the generic shader cache to OpenGL.
-
ReinUsesLisp authored
Implement a generic shader cache for fast lookups and invalidations. Invalidations are cheap except when a shader is actually invalidated, which is expensive. Use two mutexes instead of one to avoid locking invalidations for lookups and vice versa. When a shader has to be removed, lookups are locked as expected.
-
- Jun 06, 2020
-
Morph authored
-
ReinUsesLisp authored
Skip fast buffer uploads on the Nvidia 443.24 Vulkan beta driver on OpenGL. This driver throws the following error when calling BufferSubData or BufferData on buffers that are candidates for fast constant buffer uploads. These are the equivalent of push constants on Vulkan, except that they can access the full buffer. The error: "Unknown internal debug message. The NVIDIA OpenGL driver has encountered an out of memory error. This application might behave inconsistently and fail." If this error persists on future drivers, we might have to look deeper into this issue. For now, we can blacklist the driver and log it as a temporary solution.
-
ReinUsesLisp authored
Avoids logging when it's not relevant. This can potentially reduce the driver's internal thread overhead.
-
- Jun 05, 2020
-
ReinUsesLisp authored
Games using D3D idioms can join images and samplers when a shader executes, instead of baking them into a combined sampler image. This is also possible on Vulkan. One approach would be to use separate samplers on Vulkan and leave this unimplemented on OpenGL, but we can't do this because there's no consistent way of determining whether a constant buffer holds a sampler or an image. We could in theory find the first bit and, if it's in the TIC area, treat it as an image; but this falls apart when an image or sampler handle uses an index of zero. The approach used is to track a LOP.OR operation (this is done at an IR level, not at an ISA level), track the constant buffers used as sources, and store this pair. Then, outside of shader execution, join the sampler and image pair with a bitwise OR operation. This approach won't work on games that truly use separate samplers in a meaningful way, for example pooling textures in a 2D array and determining at runtime which sampler to use. This invalidates OpenGL's disk shader cache :) - Used mostly by D3D ports to Switch
-
ReinUsesLisp authored
-
- Jun 04, 2020
-
David Marcec authored
Clogs the logs quite a bit.
-
David Marcec authored
-
- Jun 03, 2020
-
ReinUsesLisp authored
NV_transform_feedback, NV_transform_feedback2, and ARB_transform_feedback3 (through its NV_transform_feedback interactions) allow implementing transform feedback as dynamic state. Maxwell implements transform feedback as dynamic state, so using these extensions with TransformFeedbackStreamAttribsNV allows us to properly emulate transform feedback without having to recompile shaders when the state changes.
-
David Marcec authored
-
FearlessTobi authored
Co-Authored-By: xperia64 <xperiancedapps@gmail.com>
-
- Jun 02, 2020
-
David Marcec authored
-
ReinUsesLisp authored
Implement atomic operations on images. On GLSL these are the imageAtomic* functions (e.g. imageAtomicAdd).
-
ReinUsesLisp authored
This is the equivalent of an image buffer on OpenGL. - Used by Octopath Traveler
-
ReinUsesLisp authored
- Used by Octopath Traveler
-
- Jun 01, 2020
-
Morph authored
gl_shader_decompiler: Declare gl_Layer and gl_ViewportIndex within gl_PerVertex for vertex and tessellation shaders
-
Morph authored
On Intel's proprietary drivers, gl_Layer and gl_ViewportIndex are not allowed as members of the gl_PerVertex block, causing the shader to fail to compile. Fix this by declaring these variables outside of gl_PerVertex.
-
VolcaEM authored
-
VolcaEM authored
Allows Minecraft: Nintendo Switch Edition (a.k.a. old Minecraft) to boot and go in-game.
-
ReinUsesLisp authored
This avoids using Nvidia's ASTC decoder on OpenGL. The last time it was profiled, it was slower than yuzu's decoder. While we are at it, fix a bug in the texture cache when native ASTC is not supported.
-
ReinUsesLisp authored
Avoids compilation errors at the cost of shader build times and runtime performance when a game hits the limit of uniform buffers we can use.
-
- May 31, 2020
-
Morph authored
Previously we were disabling compute shaders on Intel's proprietary driver due to broken compute. This has been fixed in the latest Intel drivers. Re-enable compute for Intel proprietary drivers and remove the check for broken compute.
-
- May 30, 2020
-
ReinUsesLisp authored
Geometry shaders built by Nvidia's compiler use VSETP to check whether bits [16:23] are less than or equal to 0, defaulting to a "safe" value of 0x8000'0000 when they are (safe from hardware's perspective). To avoid hitting this path in the shader, return 0x00ff'0000 from S2R INVOCATION_INFO. This seems to be the maximum number of vertices a geometry shader can emit in a primitive.
-