This is one of the most interesting topics to me.
What follows is almost a copy-paste of what I wrote on a Zoom Livetrak forum, to someone who asked the same question. I am no expert with Logic Pro itself, but this is a universal topic, so I am glad to step in here. My approach is very empirical, and probably not suited to everyone or to every kind of project, but I gladly share it in case it inspires anyone. It works for me.
*I think 3D.*
I first visualize what the band would look like on a stage, *relative to the listener* (imagine a big band).
I literally, mentally position each instrument in that 3D space: left/right, depth (front of the stage/back of the stage), and even height from the floor. Then I translate this 3D image as follows:
1) Panning addresses the instrument's left/right position.
2) Dry/Wet on reverb/delays addresses the instrument's depth.
3) EQ addresses the instrument's height, while obviously trying to harness each instrument's dominant frequencies.
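For the programmers among us, the three-step mapping above can be sketched as a tiny function. Everything here is my own illustration, not Logic Pro's API: the parameter names, the 0–1 ranges, and the simple linear "brighter = higher" EQ tilt are assumptions; the only real audio convention used is the constant-power pan law.

```python
import math

def place_instrument(pan, depth, height):
    """Translate a 3D stage position into mix parameters.

    pan:    -1.0 (hard left) .. +1.0 (hard right)
    depth:   0.0 (front of stage) .. 1.0 (back of stage)
    height:  0.0 (floor) .. 1.0 (high above the stage)
    Names and ranges are illustrative assumptions only.
    """
    # 1) Panning: a constant-power pan law keeps perceived
    #    loudness roughly even as the source moves left/right.
    angle = (pan + 1.0) * math.pi / 4.0   # map -1..1 onto 0..pi/2
    gain_l = math.cos(angle)
    gain_r = math.sin(angle)

    # 2) Depth: more reverb "wet" signal pushes the instrument
    #    toward the back of the stage; dry keeps it up front.
    wet = depth
    dry = 1.0 - depth

    # 3) Height: a gentle high-shelf tilt in dB -- brighter
    #    sounds tend to read as "higher". Purely illustrative.
    high_shelf_db = (height - 0.5) * 6.0   # -3 dB (low) .. +3 dB (high)

    return {"gain_l": gain_l, "gain_r": gain_r,
            "dry": dry, "wet": wet, "high_shelf_db": high_shelf_db}

# A centered instrument, mid-depth, at ear height:
print(place_instrument(0.0, 0.5, 0.5))
```

A centered source comes out with equal left/right gains of about 0.707 (−3 dB each), which is exactly what the constant-power law is for.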
Notice I give little importance to volume: those three steps usually do most of the job. I then tweak slightly with volume, but only as a last resort, especially since, as David also mentioned, orchestration makes an enormous difference. Each part has to have room, and every other instrument can contribute to that room by staying silent when wisdom calls for it.
If you think about it, good photographers apply very similar principles: placing elements in the frame, playing with depth of field, accentuating certain colours, all to draw the eye where they want it.
You do the same with the listener's ears.
Good luck!