Using your own SSR implementation will require you to write your own shader. You can either copy one of the existing Spine shaders and extend it with the required functionality, or use Shader Graph if you prefer node-based shader editing. If you would like to use Shader Graph, please search the forum for "shader graph" to find existing setups, with this thread as a guide:
Adding normals to a Shader(Graph)
Alternatively, you could write a surface shader, which may even be the easier option (see the sketch below).
Whichever way you want to go, it is best to simply search the web for reference implementations.
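To give a rough idea of the surface shader route: there you can simply request the screen position via the Input struct and Unity generates the necessary plumbing for you. A minimal sketch of that struct (uv_MainTex assumes your texture property is named _MainTex, as in a typical sprite shader):

struct Input
{
    float2 uv_MainTex; // UV coordinates of the _MainTex property
    float4 screenPos;  // filled in automatically; screenPos.xy / screenPos.w
                       // gives 0..1 screen UVs inside the surf() function
};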
If you end up writing a vertex/fragment shader manually:
For SSR you typically use the screen-space position (the pixel position). You can get it by adding an input to the fragment shader like this:
float4 screenSpacePos : SV_POSITION
Note that this value is filled in automatically by the rasterizer; you don't need to transfer it from the vertex shader to the fragment shader yourself.
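For illustration (this is just a sketch, not taken from the Spine shaders), a complete fragment function that receives this input and only visualizes it could look like this; _ScreenParams is a Unity built-in holding the screen width and height in xy:

fixed4 frag (float4 screenSpacePos : SV_POSITION) : SV_Target
{
    // screenSpacePos.xy holds the pixel coordinates of the fragment,
    // so dividing by the screen size yields 0..1 screen UVs.
    float2 screenUV = screenSpacePos.xy / _ScreenParams.xy;
    return fixed4(screenUV, 0, 1);
}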
If you don't need the reflections to precisely use the screen buffer, but only a similar-looking fixed texture, you could get away with a very simple implementation like this:
// Pixel position -> 0..1 screen UVs; _ScreenParams.xy holds the screen width and height.
float2 reflectionUVCoords = screenSpacePos.xy / _ScreenParams.xy;
reflectionUVCoords.y = 1.0 - reflectionUVCoords.y; // simply mirrored vertically
float3 reflectionColor = tex2D(yourReflectionTexture, reflectionUVCoords).rgb;
resultColor += yourSurfaceReflectivity * reflectionColor;
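Putting the pieces together, a complete minimal sketch of such a shader could look as follows. Note that this is not one of the official Spine shaders: the shader name and the _ReflectionTex and _Reflectivity properties are made-up placeholders, and it assumes premultiplied-alpha (PMA) textures as in the default Spine setup.

Shader "Custom/Sprite Fake SSR Sketch"
{
    Properties
    {
        _MainTex ("Sprite Texture", 2D) = "white" {}
        _ReflectionTex ("Reflection Texture", 2D) = "black" {}
        _Reflectivity ("Reflectivity", Range(0, 1)) = 0.3
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" "IgnoreProjector"="True" }
        Blend One OneMinusSrcAlpha // premultiplied alpha blending
        Cull Off
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _ReflectionTex;
            float _Reflectivity;

            struct VertexInput
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float4 vertexColor : COLOR;
            };

            struct VertexOutput
            {
                float2 uv : TEXCOORD0;
                float4 vertexColor : COLOR;
            };

            // The clip-space position is written to a separate out parameter so that
            // the fragment shader can receive the pixel position via SV_POSITION
            // without the semantic being declared twice.
            VertexOutput vert (VertexInput v, out float4 outPos : SV_POSITION)
            {
                VertexOutput o;
                outPos = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                o.vertexColor = v.vertexColor;
                return o;
            }

            fixed4 frag (VertexOutput i, float4 screenSpacePos : SV_POSITION) : SV_Target
            {
                fixed4 resultColor = tex2D(_MainTex, i.uv) * i.vertexColor;

                // Pixel position -> 0..1 screen UVs, mirrored vertically.
                float2 reflectionUVCoords = screenSpacePos.xy / _ScreenParams.xy;
                reflectionUVCoords.y = 1.0 - reflectionUVCoords.y;

                // Add the fake reflection, weighted by reflectivity and by the
                // sprite's alpha so fully transparent areas stay transparent (PMA).
                float3 reflectionColor = tex2D(_ReflectionTex, reflectionUVCoords).rgb;
                resultColor.rgb += _Reflectivity * reflectionColor * resultColor.a;
                return resultColor;
            }
            ENDCG
        }
    }
}

Since the texture is expected to contain premultiplied alpha here, the blend mode is Blend One OneMinusSrcAlpha; for straight-alpha textures you would use Blend SrcAlpha OneMinusSrcAlpha instead.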
If you want to use the screen buffer precisely, you will need to dig into the Unity shader documentation, or utilize the existing SSRR deferred-rendering post-processing functionality. The latter requires changing your scene setup (to be more 3D / 2.5D instead of flat), and your shaders need to be configured to write to the Z-buffer or provide a depth pass (so that the reflection ray has something to collide with).
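On the shader side, that part boils down to something like the following ShaderLab excerpt. This is only a sketch of the relevant render state, not a complete shader, and Fallback "Standard" is just one example of a fallback that provides a ShadowCaster pass:

SubShader
{
    // Depth-based effects need opaque or cutout-style rendering; fully
    // transparent, alpha-blended sprites do not end up in the depth buffer.
    Tags { "Queue"="AlphaTest" "RenderType"="TransparentCutout" }
    Pass
    {
        ZWrite On // write to the Z-buffer so the reflection ray has something to collide with
        // ... your CGPROGRAM block as before, but discarding transparent
        //     pixels via clip(color.a - cutoff) instead of blending them ...
    }
}
// The camera depth texture is rendered using ShadowCaster passes, so a
// Fallback that provides one (e.g. the Standard shader) also makes the
// object show up in the depth texture:
Fallback "Standard"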
And most importantly: we also wish you a merry Christmas! 🙂