Hi,
I have finished hooking up direct effect processing in my engine, with support for occlusion and transmission. This is working well so far.
Now I'm looking into adding support for reflections and/or reverb, but I'm a bit confused about a few points. After reading the discussion in this issue, I understand that reflections should be processed separately. However, a few things are still unclear to me, so I wouldn't mind some guidance:
Documentation clarification?
In the documentation, we can read the following:
Reflection simulation models sound reflecting from the source to the listener. You can also use it to model reverberation within the listener’s space (i.e., independent of any sources) by placing the source at the listener position. This lets you model smoothly-varying, physics-based reverb, with a CPU usage cost that is independent of the number of sources in your scene.
I'm interested in trying that; however, the part I'm confused about is what should go into the input buffer we provide to iplReflectionEffectApply().
For the direct effect, I stream a small chunk of samples per source (and manually advance an index each tick); once the direct effect is applied, I mix the result down into a single buffer, which then gets played by the audio engine.
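For reference, here's roughly what my per-source direct path looks like (a simplified sketch; buffer allocation and error handling are omitted, and handles like `source` and `effect` are my own):

```c
#include <phonon.h>

// Called once per audio tick for each active source. `in` holds the next
// frameSize samples of this source; `mix` is the buffer the engine plays back.
void process_direct(IPLContext context, IPLSource source, IPLDirectEffect effect,
                    IPLAudioBuffer* in, IPLAudioBuffer* out, IPLAudioBuffer* mix)
{
    // Pull the latest direct simulation results for this source.
    IPLSimulationOutputs outputs = {0};
    iplSourceGetOutputs(source, IPL_SIMULATIONFLAGS_DIRECT, &outputs);

    // I set outputs.direct.flags elsewhere to pick which terms to apply
    // (distance attenuation, occlusion, transmission, ...).
    iplDirectEffectApply(effect, &outputs.direct, in, out);

    // Mix this source's processed chunk into the engine's output buffer.
    iplAudioBufferMix(context, out, mix);
}
```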
How different is the reflection process here? Should I mix all the nearby sounds into a single input and then call iplReflectionEffectApply() on that result? That's my current interpretation of the documentation excerpt above, as sketched below.
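To make that interpretation concrete, here is what I imagine the reverb path looks like, assuming a single listener-centric "reverb" source as in the documentation excerpt (names, the mono input, and the Ambisonic output are my assumptions):

```c
#include <phonon.h>

// Once per audio tick: mix the dry chunks of all audible sources into one
// mono buffer, then run the reflection effect driven by a "reverb" source
// that was placed at the listener position in the simulation.
void process_reverb(IPLContext context, IPLSource listener_source,
                    IPLReflectionEffect effect, IPLAudioBuffer** dry_inputs,
                    int num_inputs, IPLAudioBuffer* mono_in, IPLAudioBuffer* ambisonic_out)
{
    // 1. Mix every source's dry (mono) chunk into a single input buffer.
    //    mono_in is assumed to be zeroed before this call.
    for (int i = 0; i < num_inputs; ++i)
        iplAudioBufferMix(context, dry_inputs[i], mono_in);

    // 2. Pull the reflection outputs for the listener-centric source.
    IPLSimulationOutputs outputs = {0};
    iplSourceGetOutputs(listener_source, IPL_SIMULATIONFLAGS_REFLECTIONS, &outputs);
    // (Some params fields, e.g. channel count / IR size, may need to be
    //  filled in from the simulation settings; I'm not sure yet.)

    // 3. Apply the reflection effect. The output is Ambisonic and still needs
    //    decoding before the final mixdown.
    iplReflectionEffectApply(effect, &outputs.reflections, mono_in, ambisonic_out, NULL);
}
```

Is this the intended usage, or should each source get its own reflection effect instead?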
Tail remaining
In the post I linked, adding extra functions to know whether processing (i.e., the reverb tail) has finished was mentioned as being considered. Was this ever implemented? This may not be a relevant question, however, depending on the answer to my previous one.
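In case it helps clarify what I'm after, this is the kind of check I was hoping for, continuing the reverb sketch above. I'm assuming the apply call reports a tail state, but I'm not sure that return value exists or behaves this way, hence the question:

```c
// Keep feeding silence into the reflection effect after the last source goes
// quiet, until it reports that its tail has finished playing out.
IPLAudioEffectState state =
    iplReflectionEffectApply(effect, &outputs.reflections, mono_in, ambisonic_out, NULL);

if (state == IPL_AUDIOEFFECTSTATE_TAILCOMPLETE) {
    // Safe to stop calling the effect / release the voice.
}
```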
Simulation in separate threads?
Right now, for convenience, I run the direct simulation on the main thread of my engine (but I apply the direct effects in a separate thread, where I do the audio mixing).
As I'm looking into setting up the simulation for reflections, I'm wondering whether I should use separate simulator objects for direct and reflection simulation, or share a single one between them. Is there a benefit to one approach over the other? Given that occlusion and reflections could run in their own separate threads, I was wondering whether sharing the same simulator object between the two actually matters.
The documentation doesn't really cover that case, as the examples show each simulation type separately (which makes sense when getting started).
I have a similar question regarding source objects: if their origin differs between the direct and reflection simulations, I would intuitively create separate objects and only set the inputs relevant to each kind of simulation. (A sketch of the shared variant I'm considering follows.)
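To illustrate the setup I'm weighing, here is the shared-simulator variant: one simulator and one source per emitter, created with both simulation types enabled, with the two runs on different threads. All names are mine, and I'm assuming the simulation settings also enable both flags:

```c
#include <phonon.h>

// One simulator shared by direct and reflection simulation; each source is
// created with both flags. settings->flags should also include both.
void create_shared(IPLContext context, IPLSimulationSettings* settings,
                   IPLSimulator* simulator, IPLSource* source)
{
    iplSimulatorCreate(context, settings, simulator);

    IPLSourceSettings source_settings = {0};
    source_settings.flags = IPL_SIMULATIONFLAGS_DIRECT | IPL_SIMULATIONFLAGS_REFLECTIONS;
    iplSourceCreate(*simulator, &source_settings, source);
    iplSourceAdd(*source, *simulator);
    iplSimulatorCommit(*simulator);
}

// Main thread (every game tick): cheap direct simulation.
void direct_tick(IPLSimulator simulator)     { iplSimulatorRunDirect(simulator); }

// Worker thread (less frequently): expensive reflection simulation.
void reflection_tick(IPLSimulator simulator) { iplSimulatorRunReflections(simulator); }
```

The alternative would be two simulators (and two sources per emitter), one dedicated to each simulation type, so the threads never touch the same object.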
Apologies in advance if my questions don't make a lot of sense. I'm having some trouble figuring out how I should set up reflections and how to stream the audio result. (I'm new at this. 😅)
Thank you in advance for taking the time to read and reply.
Cheers ❤