1. 16 Jan, 2020 5 commits
  2. 15 Jan, 2020 2 commits
  3. 14 Jan, 2020 11 commits
  4. 12 Jan, 2020 2 commits
  5. 11 Jan, 2020 2 commits
  6. 10 Jan, 2020 1 commit
  7. 09 Jan, 2020 2 commits
  8. 08 Jan, 2020 3 commits
  9. 07 Jan, 2020 7 commits
  10. 06 Jan, 2020 5 commits
    • Merge pull request #3276 from ReinUsesLisp/pipeline-reqs · 5be00cba
      bunnei authored
      vk_update_descriptor/vk_renderpass_cache: Add pipeline cache dependencies
    • Merge pull request #3278 from ReinUsesLisp/vk-memory-manager · ee9b4a7f
      bunnei authored
      renderer_vulkan: Buffer cache, stream buffer and memory manager changes
    • vk_renderpass_cache: Initial implementation · 5aeff9af
      ReinUsesLisp authored
      The renderpass cache is used to avoid creating renderpasses on each
      draw. The hashed structure is not currently optimized.
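The get-or-create pattern behind this renderpass cache can be sketched as a hashed map keyed on renderpass parameters. The following is a minimal, self-contained C++ sketch: `RenderPassParams`, its fields, and `RenderPassHandle` are hypothetical stand-ins rather than yuzu's actual types, and the hash combination is deliberately simple, in line with the commit's note that the hashed structure is not optimized.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Hypothetical key describing what a renderpass depends on. A real key
// would cover every attachment format, load/store op, and so on.
struct RenderPassParams {
    std::uint32_t color_format{};
    std::uint32_t depth_format{};
    bool has_depth{};

    bool operator==(const RenderPassParams& rhs) const {
        return color_format == rhs.color_format &&
               depth_format == rhs.depth_format && has_depth == rhs.has_depth;
    }
};

struct RenderPassParamsHash {
    std::size_t operator()(const RenderPassParams& params) const {
        // Naive hash combination; good enough to illustrate the lookup.
        std::size_t hash = std::hash<std::uint32_t>{}(params.color_format);
        hash ^= std::hash<std::uint32_t>{}(params.depth_format) << 1;
        hash ^= std::hash<bool>{}(params.has_depth) << 2;
        return hash;
    }
};

// Stand-in for a created VkRenderPass handle.
using RenderPassHandle = std::uint64_t;

class RenderPassCache {
public:
    // Returns a cached renderpass, "creating" one only on a cache miss,
    // so repeated draws with the same state skip renderpass creation.
    RenderPassHandle GetRenderPass(const RenderPassParams& params) {
        const auto [it, inserted] = cache.try_emplace(params, RenderPassHandle{});
        if (inserted) {
            it->second = ++creation_count; // vkCreateRenderPass would go here
        }
        return it->second;
    }

    std::size_t CreationCount() const { return creation_count; }

private:
    std::unordered_map<RenderPassParams, RenderPassHandle, RenderPassParamsHash> cache;
    std::size_t creation_count = 0;
};
```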
    • vk_update_descriptor: Initial implementation · 322d6a03
      ReinUsesLisp authored
      The update descriptor is used to store, in flat memory, a large chunk
      of staging data used to update descriptor sets through templates. It
      provides a push interface to easily insert descriptors following the
      current pipeline. The order defined in the descriptor update template
      has to be followed implicitly; bugs here can be caught with
      validation layers.
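The push interface over flat memory can be sketched as below. `DescriptorUpdateEntry` and `UpdateDescriptorQueue` are hypothetical stand-ins: a real implementation would store `VkDescriptorBufferInfo`/`VkDescriptorImageInfo` in a union and hand the flat payload to `vkUpdateDescriptorSetWithTemplate`, which reads entries back-to-back according to the template's strides.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in entry; plain integer ids replace Vulkan handles here.
struct DescriptorUpdateEntry {
    enum class Type { Buffer, Image } type;
    std::uint64_t handle;
    std::uint64_t offset; // used by buffer entries
    std::uint64_t size;   // used by buffer entries
};

class UpdateDescriptorQueue {
public:
    // Push interface: callers append entries in the exact order the
    // descriptor update template implicitly expects.
    void AddBuffer(std::uint64_t buffer, std::uint64_t offset, std::uint64_t size) {
        payload.push_back({DescriptorUpdateEntry::Type::Buffer, buffer, offset, size});
    }

    void AddImage(std::uint64_t image_view) {
        payload.push_back({DescriptorUpdateEntry::Type::Image, image_view, 0, 0});
    }

    // Flat staging data, consumed contiguously by the template update.
    const DescriptorUpdateEntry* Data() const { return payload.data(); }
    std::size_t Size() const { return payload.size(); }

    void Clear() { payload.clear(); }

private:
    std::vector<DescriptorUpdateEntry> payload;
};
```

Because the data is flat and ordered, a mismatch between push order and template order corrupts the update in a way validation layers can flag, as the commit message notes.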
    • vk_stream_buffer/vk_buffer_cache: Avoid halting and use generic cache · 5b01f80a
      ReinUsesLisp authored
      Before this commit, once the stream buffer was full (no more bytes to
      write before looping), it waited for all previous operations to
      finish. This was a temporary solution and, according to profiling,
      carried a noticeable performance penalty.
      
      To avoid this, usages of the stream buffer are marked with fences,
      and when the buffer loops it waits for those fences to be signaled.
      On average this never has to wait. Each fence knows where its usage
      finishes, resulting in a non-paged stream buffer.
      
      Separately, the buffer cache is reimplemented using the generic
      buffer cache. It makes use of the staging buffer pool and the new
      stream buffer.
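The fence-marked stream buffer described above can be sketched as follows. This is a simplified model with stand-in fences: it waits on every pending fence when the buffer loops, whereas the commit describes fences that know where their usage finishes, so the real code only waits as far as the region being reused; the names here are hypothetical.

```cpp
#include <cstddef>
#include <deque>

// Simulated fence: the real renderer would track a VkFence here; this one
// is just a flag we can query, with the commit's "usage end" bookkeeping.
struct Fence {
    bool signaled = false;
    std::size_t usage_end = 0; // where the guarded stream-buffer usage ends
};

class StreamBuffer {
public:
    explicit StreamBuffer(std::size_t size) : buffer_size{size} {}

    // Reserves `size` bytes and returns the write offset. When the buffer
    // loops, previously recorded fences are waited on; on average the
    // fences are already signaled and no wait happens.
    std::size_t Reserve(std::size_t size) {
        if (offset + size > buffer_size) {
            offset = 0; // loop back to the start
            while (!pending.empty()) {
                if (!pending.front()->signaled) {
                    ++wait_count; // real code would block on the fence here
                    pending.front()->signaled = true;
                }
                pending.pop_front();
            }
        }
        const std::size_t write_offset = offset;
        offset += size;
        return write_offset;
    }

    // Marks everything written since the last mark as guarded by `fence`.
    void MarkUsage(Fence* fence) {
        fence->usage_end = offset;
        pending.push_back(fence);
    }

    std::size_t WaitCount() const { return wait_count; }

private:
    std::size_t buffer_size;
    std::size_t offset = 0;
    std::size_t wait_count = 0;
    std::deque<Fence*> pending;
};
```

The design point is that waits move from every-time-the-buffer-fills to only-when-overwriting-unsignaled-regions, which is why the profiler-visible stall disappears in the common case.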