
Real-time Operating Via Python

So I am an inexperienced Python coder, with what I have gathered might be a rather complicated need. I am a cognitive scientist and I need precise stimulus display and button-press registration, with timing accurate to the millisecond.

Solution 1:

When people talk about real-time computing, what they mean is that the latency from an interrupt (most commonly raised by a timer) to the application code that handles it actually running is both small and predictable. This means that a control process can be run repeatedly at very precise time intervals or, as in your case, that external events can be timed very precisely. The variation in latency is usually called "jitter" - a maximum jitter of 1 ms means that an interrupt arriving repeatedly will have a response latency that varies by at most 1 ms.

"Small" and "predictable" are both relative terms and when people talk about real-time performance they might mean 1μs maximum jitter (people building inverters for power transmission care about this sort of performance, for instance) or they might mean a couple of milliseconds maximum jitter. It all depends on the requirements of the application.

At any rate, Python is not likely to be the right tool for this job, for a few reasons:

  • Python runs mostly on desktop operating systems. Desktop operating systems impose a lower limit on the maximum jitter; in the case of Windows, it is several seconds. Multi-second stalls don't happen very often, perhaps every day or two, and you'd be unlucky to have one coincide with the thing you're trying to measure, but sooner or later it will happen; jitter in the several-hundred-milliseconds range happens more often, perhaps every hour, and jitter in the tens-of-milliseconds range is fairly frequent. The numbers for desktop Linux are probably similar, though you can apply different compile-time options and patch sets to the Linux kernel to improve the situation - Google PREEMPT_RT_FULL.
  • Python's stop-the-world garbage collector makes latency non-deterministic. When Python decides it needs to run the garbage collector, your program gets stopped until it finishes. You may be able to avoid this through careful memory management and careful tuning of the garbage collector's parameters, but depending on which libraries you are using, you may not be able to.
  • Other features of Python's memory management make deterministic latency difficult. Most real-time systems avoid heap allocation (i.e. C's malloc or C++'s new) because the time it takes is not predictable. Python neatly hides allocation from you, making it very difficult to control latency. Again, using lots of those nice off-the-shelf libraries only makes the situation worse.
  • In the same vein, it is essential that real-time processes have all their memory kept in physical RAM and not paged out to swap. There is no good way of controlling this in Python, especially when running on Windows (on Linux you might be able to fit a call to mlockall in somewhere, as sketched just after this list, but any new allocation will upset things).
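For what it's worth, the mlockall route can be reached from plain Python through ctypes. A minimal sketch, assuming Linux, where the MCL_CURRENT and MCL_FUTURE flags have the values 1 and 2 (other platforms differ):

```python
import ctypes
import ctypes.util

# Linux-only: lock all current and future pages into physical RAM so the
# process can never be paged out to swap.
MCL_CURRENT = 1
MCL_FUTURE = 2

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    err = ctypes.get_errno()
    raise OSError(err, "mlockall failed; needs root or a raised RLIMIT_MEMLOCK")
```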

I have a more basic question though. You don't say whether your button is a physical button or one on the screen. If it's one on the screen, the operating system will impose an unpredictable amount of latency between the physical mouse button press and the event arriving at your Python application. How will you account for this? Without a more accurate way of measuring it, how will you even know whether it is there?

Solution 2:

Because you are trying to get a scientific measurement of a time delay with millisecond precision, I cannot recommend any process that is subject to time slicing on a general-purpose computer. Whether implemented in C, Java, or Python, if it runs in a time-shared mode, then how can the result be verifiable? You could be challenged to prove that the CPU never interrupted the process during a measurement, thereby distorting the results.

It sounds like you may need to construct a dedicated device for this purpose, with a clock circuit that ticks at a known rate and can count the discrete number of ticks that occur between stimulus and response. That device can then be controlled by software that has no such timing constraints. Maybe you should post this question on the Electrical Engineering Stack Exchange.

Without a dedicated device, you will have to develop truly real-time software that, in terms of modern operating systems, runs within the kernel and is not subject to task switching. This is not easy to do, and it takes a lot of effort to get right. More time, I would guess, than you would spend building a dedicated software-controllable device for your purpose.

Solution 3:

Most common operating systems' interrupt handling is variable enough to ruin the timing in your experiment regardless of your programming language, and Python adds its own unreliability on top. Windows interrupts are especially bad: most interrupts are serviced in about 4 milliseconds, but occasionally an interrupt lasts longer than 35 milliseconds! (Windows 7).

I would recommend trying the PsychoPy application to see if it will work for you. It approaches the problem by making the graphics card do the work in OpenGL, but some of its code still runs outside the graphics card and is subject to the operating system's interrupts. Your existing Python code may not be compatible with PsychoPy, but at least you would stay in Python. PsychoPy is especially good at showing visual stimuli without timing issues. See this page in their documentation for how to handle a button press: http://www.psychopy.org/api/event.html
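As an illustration of that event API, here is a minimal reaction-time trial. The window and stimulus details are placeholders, so treat this as a sketch to adapt rather than finished experiment code; check the exact argument names against your PsychoPy version.

```python
from psychopy import core, event, visual

win = visual.Window(fullscr=True)
stim = visual.TextStim(win, text="X")  # placeholder stimulus

clock = core.Clock()
stim.draw()
win.flip()     # flip() returns after the vertical retrace...
clock.reset()  # ...so reset the clock immediately afterwards

# waitKeys with timeStamped returns (key, time) pairs read off our clock
keys = event.waitKeys(keyList=["space"], timeStamped=clock)
key, rt = keys[0]
print(f"reaction time: {rt * 1000:.1f} ms")

win.close()
core.quit()
```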

To solve your problem the right way, you need a real-time operating system, such as LinuxRT or QNX. You could try your Python application on one of those to see if running Python in a real-time environment is good enough, but even Python introduces variability: if Python decides to garbage collect, you will get a glitch. Python itself isn't real-time.

National Instruments sells a setup that allows you to program in real time in a very easy-to-use language called LabviewRT. LabviewRT pushes your code onto an FPGA daughter card that operates in real time. It's expensive.

I strongly suggest you don't just minimize this problem but actually solve it; otherwise, your reviewers will be uncomfortable.

Solution 4:

If you are running the Python code on a Linux machine, make the kernel low-latency (preemptive). There is a compile-time flag for this (CONFIG_PREEMPT; the PREEMPT_RT patch set mentioned in Solution 1 goes further).

Make sure that the other processes running on the machine are kept to a minimum so that they do not interrupt the kernel.

Assign a higher task priority to your Python script, as in the sketch below.
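On Linux, the standard library can request a real-time scheduling class directly; a sketch of that last step follows. The priority value 50 is an arbitrary mid-range choice, and the call needs root (or CAP_SYS_NICE).

```python
import os

# Linux-only: move this process into the SCHED_FIFO real-time class so the
# kernel preempts ordinary tasks in its favour. The first argument, 0,
# means "this process". Beware: a busy loop under SCHED_FIFO can starve
# the rest of the system, so test carefully.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
```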

Solution 5:

  1. Run the Python interpreter on a real-time operating system or a suitably tweaked Linux.
  2. Shut down the garbage collector during the experiments and turn it back on afterward.
  3. Maybe actively trigger a garbage-collection round after the end of each experiment (see the sketch below).
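Points 2 and 3 map directly onto the standard gc module. A minimal sketch, where present_stimulus and wait_for_response are hypothetical stand-ins for your actual experiment code:

```python
import gc

def run_trial(present_stimulus, wait_for_response):
    gc.disable()                  # no stop-the-world pauses mid-trial
    try:
        present_stimulus()
        rt = wait_for_response()  # timing-critical section
    finally:
        gc.enable()
        gc.collect()              # pay the collection cost between trials
    return rt
```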

Additionally, keep in mind that showing an image is not instantaneous. You must synchronize your experiment with your monitor's vertical retrace phase (the pause between transmitting the last line of a frame of the display's content and the first line of the next frame).

I would start the timer at the beginning of the vsync phase that follows transmission of the frame containing whatever the participants are supposed to react to.

And one would have to keep in mind that the image is going to be at least partially visible a bit earlier than that. This matters for obtaining absolute reaction times, as opposed to results that are merely comparable with one another: because the monitor's contents do not appear instantaneously, there is an offset of about half a frame (half of a 16.7 ms frame, so roughly 8 ms at 60 Hz).
