[solved] Expanding Mantra for my own research

edited August 2012 in Miscellaneous

As I mentioned in my emails to Sebastiaan, I am investigating the possibility of expanding Mantra for my own research project, as it already seems to do some of the tasks I require. I guess in this thread I'll ask for feedback on my ideas and some amateurish questions which expose my true lack of knowledge of all things Python and tracking :)

My first question is related to the output (log) file. I can't find anywhere in the documentation what each of the columns in the log file denote. Would it be possible to get a left-to-right explanation of them all?

Comments

  • edited October 2011

    Hi Ismael,

    Welcome to the forum!

    I finally got around to doing a bit of work on Mantra (notably updating the installer to work with newer versions of Python) and I also extended the documentation a bit. It now describes the structure of the log-file:

    Documentation

    Hope this helps!

    Kindest regards,
    Sebastiaan

  • edited 4:17AM

    Thanks for the updated documentation. I have had a little play with Mantra in several lighting situations and I'm still struggling to get reliable tracking, even after adjusting the webcam settings. I'll keep trying though...

    I am not really experienced in Python (never used it) or C (last used it in 1998!) but I am valiantly trying to follow the source code to see how everything works. Two things I would like to understand are:

    1) Is it possible to hard-code a target colour (or, ideally, a fuzzy range of shades of a colour) as the object to be tracked (rather than clicking and adjusting fuzziness)? I would like to construct simple scenes with some sort of predefined "dictionary" of objects where (for example) {fuzzyRED object = ball}{fuzzyBLUE object = block}{fuzzyBROWN object = hand} and so on.

    2) Is it possible to simplify the tracking by removing the Z value, and carrying out tracking in 2D only?

    Thanks!

  • edited 4:17AM
    Thanks for the updated documentation. I have had a little play with Mantra in several lighting situations and I'm still struggling to get reliable tracking, even after adjusting the webcam settings. I'll keep trying though...

    I'm sorry to hear that. Do you feel that the camera image is of sufficient quality, comparable to the demonstration video? If so, are the objects that you've chosen sufficiently distinct from the background? You could, for example, consider using brightly colored stickers.

    1) Is it possible to hard-code a target colour (or, ideally, a fuzzy range of shades of a colour) as the object to be tracked (rather than clicking and adjusting fuzziness)? I would like to construct simple scenes with some sort of predefined "dictionary" of objects where (for example) {fuzzyRED object = ball}{fuzzyBLUE object = block}{fuzzyBROWN object = hand} and so on.

    Yes. In the GUI you can save and restore profiles (under Menu → File). And from a script, you can specify the objects directly, as shown here: http://www.cogsci.nl/software/mantra#script
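
    To illustrate the fuzzy-matching idea itself (just a sketch, not Mantra's actual API; the names, colours and tolerances here are made up), you can think of each object as a target RGB colour plus a tolerance, where a pixel belongs to the object if every channel is within the tolerance:

    # Hypothetical sketch of fuzzy colour matching; not Mantra's actual API.
    objects = {
        'ball': ((255, 0, 0), 40),  # fuzzy red: (R, G, B), tolerance
        'block': ((0, 0, 255), 40),  # fuzzy blue
        'hand': ((139, 69, 19), 60),  # fuzzy brown
    }

    def matches(pixel, target, tolerance):
        # True if every channel of pixel is within tolerance of target
        return all(abs(p - t) <= tolerance for p, t in zip(pixel, target))

    print(matches((230, 30, 20), *objects['ball']))  # True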

    2) Is it possible to simplify the tracking by removing the Z value, and carrying out tracking in 2D only?

    No, you always get the Z-coordinate, but you don't have to use it! There is no additional computational cost for determining (and then ignoring) the Z-coordinate.
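
    In a script, for example, you can simply discard Z when unpacking a sample (a hypothetical snippet, where m is a libmantra connection and a sample has the form (n, (x, y, z))):

    _, (x, y, z) = m.sample()  # keep (x, y), ignore z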

    Hope this helps!

  • edited 4:17AM

    Thanks again for the comments. As a Python amateur, how would I go about running the script (yes, I really am that amateur...)?

  • edited 4:17AM

    You would paste the script into a file and run it by typing "python [filename]" in a terminal. However, if you're new to Python I would advise you to go through a Python tutorial! "A Byte of Python" is an excellent start: http://www.ibiblio.org/swaroopch/byteofpython/files/120/byteofpython_120.pdf
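
    For example, if you saved the script as (say) myscript.py, you would run:

    python myscript.py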

  • edited 4:17AM

    I'm back looking into this after a brief hiatus, and my supervisor says I have a week to determine whether I will make use of Mantra or not. This is quite a scary thought as I'm only just beginning to understand it.

    At the moment I'm trying to set up Mantra on a laptop running 11.04, but when I type "sudo ./install.sh" into the terminal (in the Mantra folder), it responds with "sudo: ./install.sh: command not found". The install.sh file is definitely present in the folder.

    Any idea what the problem is here? When I installed the previous version of Mantra on 10.04 I don't remember encountering this issue.

    Thanks!

  • edited 4:17AM

    I've solved the installation issue now (for the record, I had to type "sudo sh ./install.sh" for it to work). Object tracking seems to be working a little better now, except near the edges of the scene, where the program reports "object lost" every time. I think my main problem is that I don't know what effect each of the webcam settings has, so I am constantly tinkering...

    Also, in the terminal window, while Mantra is running, I'm getting lots of errors in libv4l2, whatever that is...

  • edited 4:17AM

    After a long evening of playing around with Mantra I am no closer to getting the settings and lighting right. I am trying to set up a scene with three objects (a moving hand, a moving ball/highlighter/pencil/small object with solid colour, and a static desktop), but I have yet to achieve a single trial with successful tracking. In general, the system identifies all three objects even when they are not in the scene, and also considers the static desktop to be moving! Very puzzling. I'm sure it's an issue with my experiment rather than the program.

    One further question for tonight: is there any way to reset the camera options? After too much tinkering with hue/saturation levels, I found I had blue skin and my orange pen was green. I seemed to remember a "reset defaults" button but I couldn't find it. I tried deleting the hidden .conf file, but the camera settings persist regardless. Your help would be much appreciated!

  • edited 4:17AM
    I've solved the installation issue now (for the record, I had to type "sudo sh ./install.sh" for it to work).

    This is a bit unintuitive, but it's a general *nix thing. The problem is that programs need to be marked "executable", otherwise the system won't run them. You can generally do this by right-clicking on the file and checking the "executable" checkbox under "permissions" (or something along those lines). By prepending "sh" you can also run scripts that are not marked executable (which is why this worked for you).
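
    Alternatively, you can mark the script as executable from a terminal:

    chmod +x install.sh

    after which "sudo ./install.sh" should work as expected.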

    Regarding your difficulties with tracking: have you tried a different camera? And have you made sure that the objects you're trying to track have a unique and distinct colour (i.e., a colour that's not present elsewhere in the image)?

    Regarding playing around with the camera settings, you could try a program called guvcview. It's more convenient than Mantra when it comes to manipulating the camera settings. To install:

    sudo apt-get install guvcview

    I'm sorry that you're having so much trouble! It really shouldn't be this hard, so I'm a bit puzzled... If your objects have a suitable colour and you have sufficient lighting, and tracking still doesn't work properly, this may indicate that the camera is not suitable.

    One final thing that comes to mind (because you use the word "desktop"): could it be that you're trying to track stimuli on a computer display? If so, that could explain why you're having trouble. Tracking stimuli on a computer display is possible (I've done it, as explained in the paper), but it's much more difficult than tracking real objects, because the screen tends to cause a lot of glare.

  • edited 4:17AM

    Thanks for the reply. I think the problems I'm experiencing are a combination of a non-ideal camera (it's this one: http://www.amazon.co.uk/gp/product/B0013QOIN0), an old (slow) laptop, uneven lighting and my own incompetence!

    By desktop I actually meant a real desktop (or a solid coloured surface masquerading as one). The simple trial I am trying to track is my hand (wearing a very weird-looking blue latex glove) holding a brightly coloured ball/highlighter, and then dropping it so that it comes to rest on a bright green desktop. The colours are pretty much unique in the scene when I identify the objects, but seem to be confused when tracking. From this scene I would initially like to extract the path of the ball/object as it moves down and comes to rest on the desktop.

    It did occur to me that maybe Mantra's sampling rate could be too slow to log the path of an object being dropped in this way, although I suspect that was just my incredibly old laptop's lack of processing power. How frequently does Mantra write to the log file?

    Regarding the camera settings, are the changes made within Mantra affecting the camera universally (i.e. in a driver or similar, so that had I attempted to use the cam for Skype I would have had blue skin etc) or just within Mantra? Is there any way to automatically reset the settings to default? I will try out the program you suggested, anyway.

    Thanks again for your time and patience!

  • edited 4:17AM
    Thanks for the reply. I think the problems I'm experiencing are a combination of a non-ideal camera (it's this one: http://www.amazon.co.uk/gp/product/B0013QOIN0), an old (slow) laptop, uneven lighting and my own incompetence!

    You may be right (aside from the incompetence part).

    It did occur to me that maybe Mantra's sampling rate could be too slow to log the path of an object being dropped in this way, although I suspect that was just my incredibly old laptop's lack of processing power. How frequently does Mantra write to the log file?

    The sampling rate (i.e., how often a sample is written to the log) is determined by the camera (unless you have a high-speed camera, in which case other factors may become limiting). A modern webcam usually gives 25 frames per second, one frame every 40 ms, so really fast movements cannot be tracked. Unfortunately, this is a hardware limitation of the camera, not due to the speed of your laptop or of Mantra.
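
    To put some numbers on your dropped-ball scenario (a back-of-the-envelope sketch, assuming free fall and a 25 fps camera):

    g, fps = 9.81, 25.0  # gravity (m/s^2) and camera frame rate
    dt = 1.0 / fps  # 40 ms between frames
    t = 0.2  # after 0.2 s of free fall...
    v = g * t  # ...the ball moves at ~2 m/s...
    print(v * dt)  # ...so it travels ~0.08 m (8 cm) between frames

    So a fifth of a second into the fall, the ball already jumps about 8 cm from one frame to the next, which makes the fall itself hard to track. The positions before release and after landing are much easier.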

    One thing that you could try is disabling the auto-exposure setting (or something similar), if there is one. Some cameras have this setting, which adjusts the shutter time to the amount of light that's available, so you get a clear (but slower) video stream. Disabling it makes sure that the camera records at its maximum frame rate, but still no faster than that.

    Regarding the camera settings, are the changes made within Mantra affecting the camera universally (i.e. in a driver or similar, so that had I attempted to use the cam for Skype I would have had blue skin etc) or just within Mantra? Is there any way to automatically reset the settings to default? I will try out the program you suggested, anyway.

    This appears to depend on the camera. In my experience, most cameras remember the settings across multiple sessions, in which case the Mantra settings also affect Skype etc. Regarding the reset-to-default, this can be done in Guvcview.

    Good luck!

  • edited 4:17AM

    A question. I'm trying to work my way through the camera.c program to understand the representation. Am I correct in thinking that the only way objects are represented is in terms of their RGB values and fuzziness?

    I'm currently trying to think of a way to consider the boundaries of several objects to see if they are interacting (for example, if one object is resting on another, or if one object is suspended from another, or if one is contacting another briefly).

    Obviously this is a complex task when considering groups of pixels, but if I can somehow specify the boundaries of the pixel clusters to check their relative proximity and orientation that might be a rough and ready method of approaching it in a simplified and stylised 2D world. Any thoughts?

  • edited 4:17AM
    A question. I'm trying to work my way through the camera.c program to understand the representation. Am I correct in thinking that the only way objects are represented is in terms of their RGB values and fuzziness?

    Yes, that's correct.

    I'm currently trying to think of a way to consider the boundaries of several objects to see if they are interacting (for example, if one object is resting on another, or if one object is suspended from another, or if one is contacting another briefly). Obviously this is a complex task when considering groups of pixels, but if I can somehow specify the boundaries of the pixel clusters to check their relative proximity and orientation that might be a rough and ready method of approaching it in a simplified and stylised 2D world. Any thoughts?

    This goes somewhat beyond the scope of Mantra. Mantra will give you the (average) locations of objects, but doesn't provide any logic to determine when and how objects interact. It also doesn't provide you with the boundaries or the orientation of an object (although you could get the orientation indirectly by attaching coloured markers to the ends of an object; see the sketch below).
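
    If you do go the marker route, deriving an orientation and a crude contact test from the tracked coordinates is straightforward. A sketch (the contact threshold is arbitrary):

    import math

    def orientation(p1, p2):
        # angle (degrees) of the line through markers on an object's two ends
        return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

    def touching(c1, c2, threshold=20.0):
        # crude contact test: centres closer than threshold pixels
        return math.hypot(c2[0] - c1[0], c2[1] - c1[1]) < threshold

    print(orientation((100, 100), (180, 140)))  # ~26.6 degrees
    print(touching((100, 100), (110, 105)))  # True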

    I'm not sure exactly what you want to do, but if you want to analyse a scene beyond what Mantra provides (i.e., simple tracking), you could consider using OpenCV. I initially built Mantra around OpenCV as well, but it was a bit too slow for comfortable real-time tracking (although this may not be an issue on a fast machine). OpenCV offers many routines for processing visual data, and also comes with Python bindings. It can do anything that Mantra can, and much more, but it does require scripting and has a bit of a learning curve.

    URL: http://opencv.willowgarage.com/documentation/python/index.html
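
    To give you a flavour, here is a minimal colour-tracking sketch using the cv2 Python bindings (the colour range is arbitrary, and this is just an illustration of the approach):

    import cv2
    import numpy as np

    # Track anything that falls within an (arbitrary) bright green BGR range.
    lower = np.array([0, 180, 0], dtype=np.uint8)
    upper = np.array([100, 255, 100], dtype=np.uint8)

    cap = cv2.VideoCapture(0)  # first attached webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = cv2.inRange(frame, lower, upper)  # 255 where pixel is in range
        m = cv2.moments(mask)
        if m['m00'] > 0:  # any matching pixels at all?
            x, y = m['m10'] / m['m00'], m['m01'] / m['m00']  # centroid
            print('(%.1f, %.1f)' % (x, y))
        cv2.imshow('mask', mask)
        if cv2.waitKey(40) & 0xFF == 27:  # Esc quits
            break
    cap.release()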

  • edited 4:17AM

    I do realise what I'm interested in is beyond the scope of Mantra in its current state. I was thinking more about building on your code to expand its potential for what I am interested in.

    In case you're interested, my research interest is in computational modelling of motion verb acquisition, and I'd like to set up a system to analyse the interactions and movements of objects within scenes (as that is essentially what a verb labels). My current approach is to define objects and interactions as predicates with temporal logic, but having a model that can deal with actual visual input would be so much better. Unfortunately, I'm not sure I'll have the time or the necessary skills to expand Mantra in the ways I would like to.

    The omens are certainly bad... I've had to reinstall Ubuntu seven times in the past three days while attempting to get usable output from Mantra. If that's not a sign...

    Thanks for all your help, anyway!

  • edited 4:17AM

    That sounds like an interesting project! But also quite a challenging one.

    Now that I understand what you want to do, my guess is that you're better off with OpenCV. I designed Mantra for one specific purpose: tracking objects in psychological experiments. You can modify Mantra, of course, but that will likely take more time than devising something from scratch in OpenCV, because you would have to delve into the C code.

    Or perhaps it would be better to start with a purely virtual environment, and extend the project to deal with real camera input at a later stage? (None of my business, of course.)

    Good luck with your project! I hope the experience of working with Mantra hasn't been too frustrating.

  • edited 4:17AM

    I am trying to use Mantra together with OpenSesame. I have installed both packages under Ubuntu 10.04 and they both seem to be working fine. However, when I open OpenSesame, I do not see a Mantra icon. Is there something I should do to enable the Mantra plug-in in OpenSesame?

  • edited 4:17AM

    Hi Frouke,

    You need to download the plug-ins (it's actually a set of 3 plug-ins) and extract them to the OpenSesame plug-in folder, which under Ubuntu would be

    /home/[user]/.opensesame/plugins

    You can find them here: http://www.cogsci.nl/software/mantra#opensesameplugin

    After you have installed the plug-ins, you should see 3 additional icons appear in the OpenSesame item toolbar. If this does not happen, please check under preferences (Tools → Preferences) whether the plug-ins have been disabled.
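
    You can also verify the installation from a terminal:

    ls ~/.opensesame/plugins

    which should list the three plug-ins.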

    Good luck and please let me know if this doesn't work for you!

    Cheers,
    Sebastiaan

  • edited 4:17AM

    Dear Sebastiaan,

    Thanks for the help. It worked!
    I now have the three additional icons.

    But I also got the following error message:
    [screenshot no longer available]

    I am trying to run OpenSesame and Mantra on the same computer.

    The project I am trying to create is to present a letter on the screen, write a message to the Mantra log about this letter on the screen, and to have the person make the hand movement. At the end of the movement the experimenter presses the button to start the next trial.

  • edited 4:17AM

    Hi Frouke,

    I suspect that Mantra is not listening at the correct port, or has been started in TCP mode. Could that be it?

    While Mantra is tracking, you will see something like '40007 (udp)' in the top right of the window. The 40007 is the port number, which you can indicate in the mantra_connect plug-in. The 'udp' is the network protocol, which you can specify when you start Mantra. This is a bit of a technicality, but basically E-Prime requires 'tcp' mode, whereas the Python library (also used by OpenSesame) requires 'udp' mode. Since tcp is the default (OpenSesame didn't exist back then, so Mantra was E-Prime oriented), I suspect this might be the issue.
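
    If you want to see what's happening at the network level, you can do the handshake by hand with a raw UDP socket (a rough sketch of what libmantra does under the hood; the exact wire format may differ):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)  # don't hang forever if Mantra isn't responding
    s.sendto('HI', ('127.0.0.1', 40007))  # the port shown in Mantra's window
    try:
        print(s.recvfrom(1024)[0])  # Mantra should say 'HI' back
    except socket.timeout:
        print('no reply: is Mantra recording, and on the right port?')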

    Hope this helps! If it doesn't work, let me know and I'll take a closer look at it.

    Cheers,
    Sebastiaan

    PS. The experiment that you have in mind should be perfectly doable.

  • edited 4:17AM

    Dear Sebastiaan,

    I'm afraid the error message persists. Below is a screenshot of the settings of both programs. I set Mantra to port 40007 and OpenSesame to port 40007. I tried UDP (with and without a restart) and TCP, and I tried both starting the recording in Mantra before running the OpenSesame script and not doing so. Is the name of Mantra's data file important?

    [screenshot no longer available]

    Thanks for your help,
    -Frouke

  • edited 4:17AM

    Yes indeed, it appears you're doing everything correctly. I just tried it on my own laptop, and for me it works. I wonder what it could be... Could you perhaps start Mantra from a terminal and post the terminal output? You can start Mantra as follows:

    qtmantra

    Then if you try to connect to Mantra from within OpenSesame, you should see some diagnostic output in the terminal. Alternatively, you can try to connect to Mantra directly from the Python interpreter. To do this, download libmantra.py from the Mantra page and open a terminal in the folder where you downloaded the file. Then start a Python interpreter like so:

    python

    and try to connect to Mantra like so:

    from libmantra import libmantra
    m = libmantra()
    print m.sample()

    This should print a line that looks like this:

    (0, (74.0, 162.0, 404.0))

    Or in your case, it probably gives a (hopefully informative) error message.

  • edited 4:17AM

    Dear Sebastiaan,

    The terminal unfortunately did not provide any output; see the image below:
    [screenshot no longer available]

    The Python option, however, did give a possibly interesting error message:
    [screenshot no longer available]

    Thanks,
    -Frouke

  • edited 4:17AM

    Thank you, that's indeed very useful. The problem is apparently a bug in libmantra.py. It tries to 'bind' to a particular port (30007). This is not necessary at all, and usually does no harm, but in your case perhaps another program was using that port, resulting in a conflict.

    I fixed this issue, at least I believe so. Perhaps you could download the latest code snapshot from here:

    You'll find the updated plug-ins in the opensesame folder in the archive. You can simply replace the old plug-ins. You don't need to re-install Mantra itself, just the plug-ins (or actually just libmantra.py, which is part of the mantra_connect plug-in).

    Could you let me know if this solves your issue? In that case I'll upload it to the Mantra website.

  • edited 4:17AM

    Dear Sebastiaan,

    I have replaced the different libmantra.py files in my opensesame folder, but still get an error message. Below a screenshot. Would there be a way to determine whether I have replaced the correct files? Is there a particular line in the code I should look at?

    [screenshot no longer available]

    If I use the Python code, I no longer get an error message. However, it does not show me the contents of the m variable (within the 10 minutes that I waited for the output). See the image below. Is that a problem?

    [screenshot no longer available]

    Thanks,
    -Frouke

  • edited May 2013

    Right, that's one important step further. You have updated it correctly, as far as I can see, and the port-bind error is consequently gone. Right now the Mantra server is simply not answering. The two most obvious causes are a) Mantra is not running (it really needs to be in recording state, just starting the GUI isn't enough) or b) Mantra is listening at a different port.

    Is it either of those?

    For debugging, I would focus on the Python prompt, because it's easier to see the error messages. If you do the following,

    from libmantra import libmantra
    m = libmantra(port=40007) # Change port to whatever the Mantra server reports
    print m.comm('HI', 1)
    

    Mantra should say 'HI' back.

  • edited 4:17AM

    Starting the recording in Mantra before starting the OpenSesame script was what was missing. Problem solved!

    Now I have to figure out why OpenSesame says "An unexpected error occurred, which was not caught by OpenSesame. This should not happen! Message: 'str' object is not callable" at the end of my experiment. But that's a different topic :-)

    Many thanks!
