[open] Hardware for timing.
Hi all,
This is related to a previous question regarding temporal accuracy for sound presentation.
Myself and Gary, our excellent technician, have used an oscilloscope to measure the temporal accuracy of sound presentation.
Depending on the computer used, we're seeing sound onset 30-90 ms late, which is to be expected based on a previous discussion here. The delay is consistent on each computer, which allowed us to try something.
We've come up with a technique that uses frames presented as a buffer for the sound, which lets us present it in a small sequence with an accuracy of 5 ms either side of the intended time. So it ranges from spot-on to 5 ms early or late.
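Under the assumption that the latency is fixed per machine, the compensation boils down to simple arithmetic: trigger the sound earlier by the measured delay. A minimal sketch (the latency constant and function name are mine, not from the actual experiment; measure the delay per machine with an oscilloscope):

```python
AUDIO_LATENCY_MS = 90  # assumed: measured playback delay on this machine

def schedule_onsets(visual_onset_ms, audio_latency_ms=AUDIO_LATENCY_MS):
    """Return (audio_trigger_ms, visual_trigger_ms) so the sound is
    audible at the moment the frame is shown."""
    audio_trigger = visual_onset_ms - audio_latency_ms
    if audio_trigger < 0:
        # not enough lead time in the sequence to absorb the latency
        raise ValueError("visual onset too early to compensate latency")
    return audio_trigger, visual_onset_ms

print(schedule_onsets(500))  # → (410, 500)
```

The design only works because the delay is consistent on a given machine; on a system with variable latency, a single constant would not help.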
The thing is, when we use this approach in a larger and more complicated sequence (which puts more demand on processing), the visual timings are no longer consistent.
We think this is likely a hardware problem.
I'm planning on requesting new hardware to address these issues. We need visual and audio accuracy and probably a kick-ass processor.
I was wondering if anyone here could give me an idea of what might be a good system to set up to best address these needs.
Tomorrow will be the deadline for departmental requests otherwise I'll be paying for the equipment out of my own pocket, so if anyone can help by then it would be great.
Thanks folks,
Boo.
ADDITIONAL QUERY: I think I have my visual stimuli drawn larger than the screen size, and OpenSesame resizes them on each call. Could this be the main cause of the dropped frames that are being reported? I plan on changing this anyway.
Comments
Right, so you are essentially measuring the playback delay, and then starting sound playback a little bit before the sketchpad so that they are presented at the same time? I suppose that's a good trick, especially if you have the equipment to verify that it actually works.
Regarding the system, I personally couldn't say, as I basically use whatever happens to be in the lab. But I suppose most modern systems should be fine, especially if they have a good graphics and sound card (i.e. preferably not built-in).
No, I don't think that should be a problem per se: OpenSesame does not resize or move anything. If you are concerned about frame dropping, perhaps you could describe your trial sequence in more detail, so that we can think of improvements? In principle, even long trial sequences should not suffer from significant timing issues.
Check out SigmundAI.eu for our OpenSesame AI assistant!
I've had a suggestion for a video card from a perception researcher: it's apparently made by Cambridge Research Systems and is supposedly very high standard.
I'm still looking into processors and audio cards.
Boo.
Sorry, just saw you replied. Will read now.
Thanks for your reply.
The buffer approach did indeed appear to work. We used the oscilloscope to measure the beep followed by the next sketchpad. The output showed that the beep played approx. 90 ms after its call, which lined up quite well with where we wanted it.
As I said above, I think I'll make an attempt to acquire the video card mentioned. However, I'm still looking into the audio and processing options.
If anyone has any suggestions they are welcome.
I'll get back to you regarding the dropped frames as I'm going to make some changes today.
Thanks,
Boo.
Hi,
An update on the audio visual temporal accuracy.
Gary, our wonderful IT engineering guru, conceived and developed a solution to my problems.
Essentially he created an external switch that is connected to the PC (receiving commands to switch on or off via OS) and an external CD player and amplifier.
He recorded an hour-long CD of the required tone (3500 Hz) that plays constantly. Depending on the condition, the switch is 'flicked' and the CD tone plays for 7 ms (this duration is coded into the switch and can be changed per experiment, though not yet per trial...) to the appropriate ear (left, middle or right).
He has also factored in the relation between when the tone and visual image are called and their actual display times. This has resulted in accuracy on the order of microseconds.
For all intents and purposes, we have a tone played at exactly the same time as the image is displayed.
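For anyone curious about the signal being gated, here is a rough illustration of what a 7 ms slice of a 3500 Hz tone amounts to in samples. The sample rate and helper name are my own assumptions; the real gating happens in Gary's hardware, not in software:

```python
import math

SAMPLE_RATE = 44100  # Hz, assumed CD-quality rate
TONE_HZ = 3500
GATE_MS = 7

def gated_tone(freq=TONE_HZ, dur_ms=GATE_MS, rate=SAMPLE_RATE):
    """Generate the samples the switch effectively passes:
    a freq-Hz sine truncated to dur_ms."""
    n = int(rate * dur_ms / 1000)
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

samples = gated_tone()
print(len(samples))  # → 308 (7 ms at 44.1 kHz, about 24.5 cycles of the tone)
```

A 7 ms gate on a continuously playing tone sidesteps the PC's sound-buffer latency entirely, which is why the accuracy jumps from milliseconds to microseconds.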
I'll post a link to his detailed explanation and timing data soon.
There are some lovely colourful oscilloscope graphs included also.
Boo.
Gary is getting famous! Someone (Anthony) mentioned him in another thread as well, in connection with a button box that he developed (which indeed looked very good).
And deservedly so! Gary is really bringing some reliable and affordable solutions to us mere non-engineer types.
As Arnold used to say, I'll be back with a link to the nice data soon.
And here we go:
http://psy.swan.ac.uk/staff/freegard/audio switch report.pdf