We don't need to be calling the render node type check from the tight
loop of the render job. Hoist that into the private header and use it
instead via the Class pointer.
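A minimal sketch of the idea (the class struct layout and the helper names
below are assumptions, not the actual GTK definitions): read the node type
through the instance's class pointer in an inline from the private header,
rather than going through the checked-cast machinery for every node:

    static inline GskRenderNodeType
    gsk_render_node_get_node_type_inline (const GskRenderNode *node)
    {
      const GskRenderNodeClass *klass;

      /* assumes the class struct records its node type */
      klass = (const GskRenderNodeClass *) ((const GTypeInstance *) node)->g_class;
      return klass->node_type;
    }

    /* in the render job's hot loop: a plain comparison, no g_type_check */
    if (gsk_render_node_get_node_type_inline (node) == GSK_TEXTURE_NODE)
      handle_texture_node (job, node);   /* hypothetical caller */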
Like the previous change, this uses GdkArrayImpl instead of GArray, here
for tracking modelview changes. This matters less than the clip tracking
simply because it is used less often, but it keeps the implementation
consistent with the clip tracking code.
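For reference, a GdkArrayImpl instance is stamped out by defining a few
macros and including the implementation, roughly like this (the element
type and names here are made up for illustration; the exact macro set may
differ):

    #define GDK_ARRAY_NAME modelviews
    #define GDK_ARRAY_TYPE_NAME Modelviews
    #define GDK_ARRAY_ELEMENT_TYPE GskGLRenderModelview
    #define GDK_ARRAY_BY_VALUE 1
    #define GDK_ARRAY_PREALLOC 16
    #include "gdk/gdkarrayimpl.c"

The generated accessors are typed, inlinable functions over a plain buffer,
which is what gives the compiler room to optimize.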
We can end up spending a lot of time in g_array_maybe_expand() through the
use of g_array_set_size() for clip tracking. That is partly due to GArray
being a generic, dynamically sized container. Instead, we can use
GdkArrayImpl and let the compiler do what it does best: elide some of the
work and hoist the rest into the calling function.
This also fixes a potential UAF in gsk_gl_render_job_push_contained_clip().
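The UAF is the classic growable-array pattern, illustrated here with a
stand-in Clip type rather than the actual GTK code:

    Clip *parent, child;

    parent = &g_array_index (clips, Clip, clips->len - 1);
    g_array_set_size (clips, clips->len + 1);  /* may realloc and move the data */
    child = *parent;                           /* parent may now dangle: UAF */

Copying the parent element by value before growing the array (which the
by-value GdkArrayImpl usage encourages) avoids holding the stale pointer.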
The source uniform may or may not point
to a glyph atlas. The optimization we do
for color nodes is only possible if it does,
so check this.
Fixes: #6094
Take a render node as source and a GskPath and GskStroke,
and fill the area that is covered when stroking the path
with the given stroke parameters, like cairo_stroke() would.
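Assuming the new constructor follows the usual GSK naming and is
gsk_stroke_node_new() (the name is not spelled out above), usage would look
roughly like this:

    GskPathBuilder *builder = gsk_path_builder_new ();
    gsk_path_builder_add_circle (builder, &GRAPHENE_POINT_INIT (50, 50), 40);
    GskPath *path = gsk_path_builder_free_to_path (builder);

    GskStroke *stroke = gsk_stroke_new (4.0);   /* line width; default caps/joins */
    GdkRGBA red = { 1, 0, 0, 1 };
    GskRenderNode *source = gsk_color_node_new (&red, &GRAPHENE_RECT_INIT (0, 0, 100, 100));

    /* paints 'source' only where stroking 'path' with 'stroke' covers,
     * analogous to cairo_stroke() with a solid source */
    GskRenderNode *node = gsk_stroke_node_new (source, path, stroke);

    gsk_render_node_unref (source);
    gsk_stroke_free (stroke);
    gsk_path_unref (path);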
Add a bunch of inline functions for graphene_rect_t.
We use those quite extensively in tight loops, so making them as fast as
possible via inlining has massive benefits.
The current render-heavy benchmark I am playing with (the paris-30k in
node-editor) went from 49fps to 85fps on my AMD.
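As an example of the kind of helper this adds (the name and exact function
set are assumptions), an inlined overlap test avoids a library call per
iteration:

    static inline gboolean
    gsk_rect_intersects (const graphene_rect_t *r1,
                         const graphene_rect_t *r2)
    {
      /* assumes both rects are normalized (non-negative size) */
      return r1->origin.x < r2->origin.x + r2->size.width
          && r2->origin.x < r1->origin.x + r1->size.width
          && r1->origin.y < r2->origin.y + r2->size.height
          && r2->origin.y < r1->origin.y + r1->size.height;
    }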
When the command queue is out of batches, there is
no point in doing further work like allocating uniforms.
This also helps us avoid the assertions in the uniform
code that we would hit when we run out of uniform space.
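Roughly, this is an early-out at the top of the
per-node work, with a hypothetical helper name:

    if (gsk_gl_command_queue_get_batches_remaining (job->command_queue) == 0)
      return;   /* nothing can be recorded, so don't allocate uniforms */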
Pass the GLsync object from the texture into our
command queue, and when executing the queue,
wait on the sync object the first time we
use its associated texture.
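The wait is a server-side one; a sketch of the
first-use check (the per-texture bookkeeping
struct is assumed, the GL calls are standard):

    if (info->sync != NULL && !info->waited_on_sync)
      {
        /* queued in the GL command stream; does not block the CPU */
        glWaitSync (info->sync, 0, GL_TIMEOUT_IGNORED);
        info->waited_on_sync = TRUE;
      }

    glBindTexture (GL_TEXTURE_2D, info->texture_id);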
When adding mask nodes, I overlooked that
we have two separate functions for determining
what transforms a node supports without offscreens.
Since we claim that mask nodes support general
transform, they must certainly support 2d transforms
as well.
When the GL texture already has a mipmap, we don't
have to download and reupload it to generate one.
We differentiate the handling: for texture scale
nodes, we force mipmap creation even if it requires
us to reupload the GL texture; for plain texture
nodes, we just take advantage of a preexisting
mipmap to allow trilinear filtering for downscaling,
or create one if we have to upload the texture
anyway.
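The resulting decision looks roughly like this
(the helper and variable names are made up for
illustration):

    if (is_texture_scale_node &&
        filter == GSK_SCALING_FILTER_TRILINEAR)
      {
        /* force a mipmap, even if that means downloading and
         * re-uploading the GL texture */
        ensure_mipmap (texture, TRUE);
      }
    else if (texture_has_mipmap (texture) || texture_needs_upload (texture))
      {
        /* plain texture node: use a preexisting mipmap, or build
         * one only when we are uploading the texture anyway */
        ensure_mipmap (texture, FALSE);
      }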
Store texture coordinates for each slice
instead of assuming 0,0,1,1, and generate
overlapping slices to allow for proper mipmaps.
This almost fixes trilinear filtering with
sliced textures.
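Per slice, that amounts to storing something
like this (field names are assumed, not the
actual struct):

    typedef struct
    {
      cairo_rectangle_int_t rect; /* region of the full texture covered */
      guint texture_id;           /* GL texture holding the slice plus overlap */
      graphene_rect_t area;       /* texture coords of rect inside that GL
                                     texture, no longer assumed to be 0,0,1,1 */
    } TextureSlice;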
Instead of uploading a texture once per filter, ensure textures are
uploaded as little as possible and use samplers to switch between
filters.
Unfortunately, we sometimes still have to reupload a texture, when it is
an external one and we want to create mipmaps.
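Sampler objects carry the filtering state separately from the texture, so
switching filters becomes a bind instead of a re-upload; in plain GL terms:

    GLuint sampler;

    glGenSamplers (1, &sampler);
    glSamplerParameteri (sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri (sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* texture_id was uploaded once; the same texture can be drawn with
     * different samplers without touching its data again */
    glActiveTexture (GL_TEXTURE0);
    glBindTexture (GL_TEXTURE_2D, texture_id);
    glBindSampler (0, sampler);   /* texture unit 0 */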
Use the same approach and only create an offscreen
that is big enough for the clipped part of the scaled
texture.
If the clipped part is still too large for a single
texture, we give up and just render the texture without
filters (using the regular texture rendering code path
which supports slicing).
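In rough terms (the variable and helper names
are assumed), the offscreen is sized from the
intersection of the clip with the scaled bounds,
with a fallback when even that exceeds GL limits:

    graphene_rect_t visible;

    if (!graphene_rect_intersection (clip_bounds, &node->bounds, &visible))
      return;   /* entirely clipped away, nothing to draw */

    if (visible.size.width * scale_x > max_texture_size ||
        visible.size.height * scale_y > max_texture_size)
      render_texture_without_filters (job, node);   /* slicing-capable path */
    else
      render_scaled_offscreen (job, node, &visible);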
The following commit will add the texture-scale-magnify-10000x
test which fails without this fix.