How you react to a mouse click or touch depends on the framework/engine you use, e.g. whether you use plain DOM events, something Pixi provides, or something else. The docs of whatever you use should explain how to react to such an event; that's not something we can help with.
Once you are inside the function that reacts to the mouse/touch event, you can play back an animation on the skeleton. In our spine-ts runtime that looks like this, provided you have a skeleton and an animation state:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.1/spine-ts/spine-webgl/example/barebones.html#L46
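For illustration, a minimal sketch of such a handler might look like this. It assumes you already have a `canvas`, a `skeleton`, and an `animationState` set up as in the barebones example above; `"jump"` and `"idle"` are placeholder names for animations in your own skeleton data.

```ts
// Assumed to exist from your own setup, as in the barebones example.
declare const canvas: HTMLCanvasElement;
declare const animationState: spine.AnimationState;

canvas.addEventListener("click", () => {
  // Play the (hypothetical) "jump" animation once on track 0,
  // then queue a looping "idle" to start when it finishes.
  animationState.setAnimation(0, "jump", false);
  animationState.addAnimation(0, "idle", true, 0);
});
```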
pixi-spine exposes these objects to you, so you can use the same API there.
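As a rough sketch of the pixi-spine side (the variable and animation names are assumptions, not taken from a specific example): the `Spine` display object exposes the spine-ts animation state as `state`, and you can wire it up to Pixi's pointer events:

```ts
import { Spine } from "pixi-spine";

// `spineBoy` is a hypothetical Spine instance created from your loaded data.
declare const spineBoy: Spine;

spineBoy.interactive = true;
spineBoy.on("pointerdown", () => {
  // Same core AnimationState API as in the barebones example.
  spineBoy.state.setAnimation(0, "jump", false);
});
```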
If you want to drag bones around based on user interaction, you can find an example of how to do that here:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.1/spine-ts/spine-webgl/example/bone-dragging.html
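The core of that example boils down to something like the following sketch, assuming you have already converted the pointer position into the skeleton's world coordinate system (that conversion is the renderer-specific part):

```ts
// `bone` is the spine.Bone being dragged; (worldX, worldY) is the pointer
// position in skeleton world coordinates, obtained in a renderer-specific way.
declare const bone: spine.Bone;
declare const skeleton: spine.Skeleton;
declare const worldX: number, worldY: number;

const coords = new spine.Vector2(worldX, worldY);
// Bone positions are relative to the parent bone, so convert the world
// position into the parent's local space before assigning it.
if (bone.parent) bone.parent.worldToLocal(coords);
bone.x = coords.x;
bone.y = coords.y;
// Recompute world transforms so child bones follow before the next render.
skeleton.updateWorldTransform();
```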
This is based on our spine-ts runtime, using the WebGL backend to render onto an HTML canvas. You can apply pretty much all of it to pixi-spine as well; the only differences are where you get the mouse/touch event from and possibly which coordinate system you are working in. pixi-spine uses and exposes our spine-ts runtime, and so does Phaser, which means the core APIs like Skeleton and AnimationState are the same in all of them. What changes is how rendering is done, how you get user input, and so on. That's something we cannot help with, as it is Phaser/PixiJS specific.