[solved] Using OpenSesame for Android Tablet Use (touch interaction)

edited September 2014 in OpenSesame

Hello,

For my research project I need to create a simple experiment for use on a tablet. I want to display a still image, which the participant then has to drag either towards or away from them, with their response times recorded (e.g. how long before they start moving the image, and how long the movement itself takes). I was wondering whether anyone could tell me if this is possible in OpenSesame? I'm (very) new to it, but I wanted a bit of advice before I try to learn everything, only for it to turn out to be fruitless!

Thanks in advance,

Charlotte

Comments

  • edited August 2014

    Hi Charlotte,

    Welcome to the forum! The brief answer is: yes, this is possible. But it will require some Python scripting, as what you want isn't that simple (it's not too complicated either, but it might seem that way if you're new to programming).

    The general idea is simple: a touch response is very similar to a mouse response, so you can simply use the mouse functions. What you will want to do is continuously monitor the mouse position and update the display accordingly. To do this, you use a canvas (documentation). The code snippet below is from the mouse documentation page, and shows the basic principles of mouse-display interaction. To see it in action, place it in the Run phase of an inline_script item.

    # to be able to use the functions, you will need to
    # import them first!
    from openexp.mouse import mouse
    from openexp.canvas import canvas
    
    # next we create so called 'instances', which we need
    # to use the mouse and canvas functions
    my_mouse = mouse(exp)
    my_canvas = canvas(exp)
    
    # this while loop runs until 'break' is called
    while True:
        # check if there is a click
        button, position, timestamp = my_mouse.get_click(timeout=20)
        # if there is one, we should stop the loop
        if button is not None:
            break
        # get the current mouse position
        pos, time = my_mouse.get_pos()
        # clear the entire canvas, erasing the
        # previously visible dot
        my_canvas.clear()
        # draw a new dot on the mouse position
        my_canvas.fixdot(pos[0], pos[1])
        # now update the display, so the new
        # dot will become visible on-screen
        my_canvas.show()
    

    There are a few things you will have to adjust before this works for your purposes. First, you do not want to show a simple dot, but an image. Second, we don't want the interaction to end on a click, but rather when the image has reached a certain position. So we change the script to the following:

    # to be able to use the functions, you will need to
    # import them first!
    from openexp.mouse import mouse
    from openexp.canvas import canvas
    
    # next we create so called 'instances', which we need
    # to use the mouse and canvas functions
    my_mouse = mouse(exp)
    my_canvas = canvas(exp)
    
    # we need to know where the image file is located
    # before we can show it (replace the image name
    # by the name of your image!)
    path = exp.get_file(u'image_in_pool.png')
    
    # we want to define the ending position, i.e.
    # the correct response location
    corloc = (200, 200)
    # we also want to define the maximal distance
    # between the image and the correct response
    # location (this is in pixels)
    maxdistance = 50
    
    # this while loop runs until 'break' is called
    while True:
        # get the current mouse position
        pos, time = my_mouse.get_pos()
        # clear the entire canvas, erasing the
        # previously visible image
        my_canvas.clear()
        # draw a new image on the mouse position
        my_canvas.image(path, x=pos[0], y=pos[1])
        # now update the display, so the new
        # image will become visible on-screen
        my_canvas.show()
        # calculate the distance between the image
        # and the correct response location
        distance = ((corloc[0]-pos[0])**2 + (corloc[1]-pos[1])**2)**0.5
        # if the image is close enough, we should
        # stop the loop
        if distance < maxdistance:
            break
    

    (more in next post...)

  • edited August 2014

    (...continuing from previous post)

    We're nearly there now. The final steps would be to 1) wait for the first touch to save the RT, and 2) record how long the movement lasted.

    # to be able to use the functions, you will need to
    # import them first!
    from openexp.mouse import mouse
    from openexp.canvas import canvas
    
    # next we create so called 'instances', which we need
    # to use the mouse and canvas functions
    my_mouse = mouse(exp)
    my_canvas = canvas(exp)
    
    # we need to know where the image file is located
    # before we can show it (replace the image name
    # by the name of your image!)
    path = exp.get_file(u'image_in_pool.png')
    
    # we want to define the ending position, i.e.
    # the correct response location
    corloc = (200, 200)
    # we also want to define the maximal distance
    # between the image and the correct response
    # location (this is in pixels)
    maxdistance = 50
    
    # draw the image in the canvas centre
    my_canvas.image(path)
    
    # show the image and get the starting timestamp
    trialstart = my_canvas.show()
    
    # wait for the first touch
    button, position, movstart = my_mouse.get_click(timeout=None)
    
    # this while loop runs until 'break' is called
    while True:
        # get the current mouse position
        pos, time = my_mouse.get_pos()
        # clear the entire canvas, erasing the
        # previously visible image
        my_canvas.clear()
        # draw a new image on the mouse position
        my_canvas.image(path, x=pos[0], y=pos[1])
        # now update the display, so the new
        # image will become visible on-screen
        timestamp = my_canvas.show()
        # calculate the distance between the image
        # and the correct response location
        distance = ((corloc[0]-pos[0])**2 + (corloc[1]-pos[1])**2)**0.5
        # if the image is close enough, we should
        # stop the loop
        if distance < maxdistance:
            break
    
    # calculate the movement time
    movtime = timestamp - movstart
    # calculate the RT
    resptime = movstart - trialstart
    
    # store the variables so that they are accessible in
    # the OpenSesame GUI
    exp.set("movetime", movtime)
    exp.set("reponse_time", resptime)
    

    Please note that this is simply an example of code that does something resembling what you want to do. Of course, it will need some tweaking! Have a look at the documentation page and work through some of the tutorials before you start. At least now you know you won't be doing it for nothing!

    Good luck!

    Edwin

  • edited 7:34AM

    Thank you so much! It does seem complicated seeing as I'm new to programming, but I'm hoping that I will be able to pick it up somehow.

    My only issue is that when I try to run it (to test it), it says 'pos is not defined'. I'm sure there's a simple solution; I've tried setting the mouse pos to (0,0) (which I understand is the 'default'?), but that doesn't seem to have helped, and I can't find an immediate answer after having a look for one.

    Sorry for the simple question! And thank you again,

    Charlotte

  • edited 7:34AM

    Oops, very sorry, that's my mistake! I've updated the code in my second post.

  • edited 7:34AM

    Thank you, you've been such a great help! My one niggle is that, even when the image is dragged to the top of the screen, the experiment doesn't end. I imagine this is because the correct response location is in the wrong place, or the maximum distance between the image and the correct response location needs to be altered. Are both of these in pixels? I'm assuming here that (200,200) for corloc refers to the top and bottom of the screen.

    Sorry for all the questions, and thank you again!

  • edited 7:34AM

    Reading back over my last comment, I don't know if I've been very clear! As I only want my participants to be able to drag the image either upwards or downwards, I imagine I need a 'point of no return': once the image has been dragged past a certain point on the screen, it is recorded as having been dragged that way (which way the image is dragged, either towards or away from the participant, is crucial to the experiment).

    I've just been having the issue that I can't drag the image to a point where it is accepted as a response, I imagine because I need to alter where the correct response location is!

    Thanks again,

    Charlotte

  • edited August 2014

    Hi Charlotte,

    Where (200,200) is depends on your screen's resolution. In general, it will be in the top-left quadrant. The best thing to remember is that (0,0) is the top-left corner, that the horizontal value increases as you move rightward over the screen, and that the vertical value increases as you move downward. *

    The second number in those coordinates denotes the vertical position, so you will probably want to use that for your dragging experiment. For example, you could check whether the vertical position is either below a lower bound (i.e. near the top of the screen) or above an upper bound (i.e. near the bottom of the screen). Have a look at the code below, which illustrates the idea:

    # to be able to use the functions, you will need to
    # import them first!
    from openexp.mouse import mouse
    from openexp.canvas import canvas
    
    # next we create so called 'instances', which we need
    # to use the mouse and canvas functions
    my_mouse = mouse(exp)
    my_canvas = canvas(exp)
    
    # we need to know where the image file is located
    # before we can show it (replace the image name
    # by the name of your image!)
    path = exp.get_file(u'image_in_pool.png')
    
    # the upper bound is 200 pixels above the screen
    # bottom (that's a bit confusing, I know)
    upper = exp.get("height") - 200
    # the lower bound will be 200 pixels below the
    # screen top (again, a bit confusing, sorry)
    lower = 200
    
    # draw the image in the canvas centre
    my_canvas.image(path)
    
    # show the image and get the starting timestamp
    trialstart = my_canvas.show()
    
    # wait for the first touch
    button, position, movstart = my_mouse.get_click(timeout=None)
    
    # this while loop runs until 'break' is called
    while True:
        # get the current mouse position
        pos, time = my_mouse.get_pos()
        # clear the entire canvas, erasing the
        # previously visible image
        my_canvas.clear()
        # draw a new image on the mouse position
        my_canvas.image(path, x=pos[0], y=pos[1])
        # now update the display, so the new
        # image will become visible on-screen
        timestamp = my_canvas.show()
        # check if the image location is below the lower
        # bound (i.e. at the screen top)
        if pos[1] < lower:
            response = "top"
            break
        # check if the image location is above the upper
        # bound (i.e. at the screen bottom)
        if pos[1] > upper:
            response = "bottom"
            break
    
    # calculate the movement time
    movtime = timestamp - movstart
    # calculate the RT
    resptime = movstart - trialstart
    
    # store the variables so that they are accessible in
    # the OpenSesame GUI
    exp.set("response", response)
    exp.set("movetime", movtime)
    exp.set("reponse_time", resptime)
    

    *Please note that this is a convention that not everybody uses, but it is very common.
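
    To make the convention concrete, here is a small sketch (the example resolution in the comments is an assumption, not from the thread):

    # a quick illustration of the coordinate convention
    w = exp.get("width")   # e.g. 1024
    h = exp.get("height")  # e.g. 768
    # (0, 0)     is the top-left corner
    # (w-1, 0)   is the top-right corner
    # (0, h-1)   is the bottom-left corner
    # (w/2, h/2) is the screen centre
    # so a target near the screen top, horizontally centred:
    corloc = (w/2, 200)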

  • edited 7:34AM

    Thank you so much, it worked perfectly! The last thing I want to do is see where the participant touched to move the image (it's a study on sexual interest, so this bit is of interest!). I've looked around for various answers, but I'm not sure what the best solution is. The likelihood is that I'll use a Tobii eye-tracker in parallel with the tablet in order to compare the two methods (pretty much the main aim of the project!), so I'm unsure whether to divide each image into AOIs, as it will be on the Tobii. I found this discussion on here: http://forum.cogsci.nl/index.php?p=/discussion/1041/open-areas-of-interest-eye-tracking/p1

    I don't know whether this will be apt for me, or whether I should get the exact mouse position. For this, I've found various solutions e.g.

    x, y = win32gui.GetCursorPos()
    flags, hcursor, (x, y) = win32gui.GetCursorInfo()

    Or:

    from ctypes import windll, Structure, c_ulong, byref

    class POINT(Structure):
        _fields_ = [("x", c_ulong), ("y", c_ulong)]

    def queryMousePosition():
        pt = POINT()
        windll.user32.GetCursorPos(byref(pt))
        return {"x": pt.x, "y": pt.y}

    pos = queryMousePosition()
    print(pos)

    There are also many more that I've seen that use 'Tkinter'. I was just wondering A) if the ones I have found will do what I want, and B) whereabouts in the code to put them.

    Again, thanks so much for all your help!

    Charlotte

  • edited 7:34AM

    Hi Charlotte,

    The solutions you have found seem to be specifically for GUIs (Tkinter is a module for building those in Python). The solution for OpenSesame is actually much simpler!

    As you can see in the script, directly before the while loop, my_mouse.get_click is called, and its return values are stored in some variables. The position is stored in the variable position; this is where the participant first touched the screen. To store it, simply set it so that OpenSesame knows of its existence, using exp.set("position", position), and save it with a logger item.

    An important thing to note is that the current script draws the image centred on the touch/mouse position. So storing the mouse position over time will give you the centre of the image over time, regardless of where in the image the first touch occurred!
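
    A minimal sketch of storing that first touch, assuming the variables from the scripts above (touch_x and touch_y are hypothetical variable names; position is the tuple returned by get_click):

    # store the first touch position, so that a logger item
    # can write it to the data file (the variable names are
    # illustrative, not from the thread)
    exp.set("touch_x", position[0])
    exp.set("touch_y", position[1])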

    For the AOIs: these are usually (for good reasons!) applied only after data collection, using the gaze position data. You could use exactly the same approach using mouse coordinates (although you should keep in mind my previous comment on the image being centred around the mouse position).
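
    For illustration, a sketch of how such an offline check could look; the rectangle values and the helper name in_aoi are hypothetical:

    # classify a recorded coordinate against a rectangular AOI,
    # defined as a (left, top, width, height) tuple in pixels
    def in_aoi(x, y, aoi):
        left, top, w, h = aoi
        return left <= x < left + w and top <= y < top + h

    # e.g. a 200x200 region in the top-left corner of the screen
    print(in_aoi(150, 80, (0, 0, 200, 200)))  # True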

    If you need further explanation, please do ask!

  • edited September 2014

    Thank you again! I would just like to clarify: does this mean that I can't discover where the image was first touched (i.e. the mouse coordinates for creating AOIs), as it will always give me the centre of the image? Or will the first touch accurately reflect where it was on the image, with only the subsequent tracking of the touch giving the centre position?

    Sorry if I've misunderstood; it's just that this part of the experiment is crucial, so I am eager to know if it's possible!

    Thanks again,

    Charlotte

  • edited 7:34AM

    Hi Charlotte,

    Sorry for the tardy reply, I've been a little busy! To clarify: in the initial situation, the image is displayed centrally. The first touch can be anywhere: on the image, or on the rest of the screen. From this initial touch, you could figure out where on the image the participant put his/her finger.

    Directly after this initial touch, however, the script (as it is above) will move the image's centre to the location of the touch. So from that point on, all touch locations will be at the image centre.
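
    A minimal sketch of how you could recover where on the image that first touch landed, assuming (as in the scripts above) that the image starts at the screen centre; the offset variable names are hypothetical:

    # the image is initially centred on the screen, so the
    # touch offset relative to the image centre is the touch
    # position minus the screen centre
    centre_x = exp.get("width") / 2
    centre_y = exp.get("height") / 2
    exp.set("touch_offset_x", position[0] - centre_x)
    exp.set("touch_offset_y", position[1] - centre_y)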

    Does this make it clear?

    Best,

    Edwin

  • edited September 2014

    That's fine, thank you for the reply! I understand now. I was just wondering whether I need to write a bit of code to be able to close the experiment? As I'll be running it on a tablet, there are obviously no keys, so is there some way to close the programme (e.g. if a participant no longer wants to participate) without the use of a keyboard?

    Thank you again,

    Charlotte

  • edited 7:34AM

    Hi Charlotte,

    Clicking the top-left and then the top-right of the screen within two seconds should kill the experiment, like the Escape key does in the desktop version. Alternatively, pressing the Back button on your device might work as well (I believe the Escape key's function is mapped onto the Back key).

    Good luck!

  • edited 7:34AM

    Hello,

    A bit of a revival of this thread! The code works perfectly and I'm about to start testing on a Galaxy Tab S (on which it works very well). However, I was wondering how I would go about displaying words at the top and bottom of the screen? (I want the words 'like' and 'dislike' displayed, as I'm going to have a congruent and an incongruent condition, and I don't want participants to get confused about which way they're meant to swipe!) I assumed it would involve the 'print' command, but I can't figure out how to get the words to display constantly during the entire experiment.
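
    A minimal sketch of one way to do this (not from the thread): inside the while loop from the scripts above, redraw the labels every time the canvas is cleared, before calling my_canvas.show(). The label texts and positions are assumptions:

    # draw the response labels on every frame, so they stay
    # visible while the image is being dragged
    my_canvas.clear()
    my_canvas.text(u'dislike', x=exp.get("width")/2, y=50)
    my_canvas.text(u'like', x=exp.get("width")/2, y=exp.get("height")-50)
    my_canvas.image(path, x=pos[0], y=pos[1])
    timestamp = my_canvas.show()

    The same text calls would go on the initial canvas too, before the first touch, so the labels are visible for the whole trial.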

    Sorry for such a simple question and thank you for all the help so far!

    Charlotte
