
Lambda Versus List Comprehension Performance

I recently posted a question using a lambda function, and in a reply someone mentioned that lambda is going out of favor and that I should use list comprehensions instead. I am relatively new to Python.

Solution 1:

Your tests are doing very different things. With S being 1M elements and T being 300:

[x for x in S for y in T if x==y] = 54.875

This option does 300M equality comparisons.

 

filter(lambda x:x in S,T) = 0.391000032425

This option does 300 linear searches through S.

 

[val for val in S if val in T] = 12.6089999676

This option does 1M linear searches through T.

 

list(set(S) & set(T)) = 0.125

This option does two set constructions and one set intersection.


The differences in performance between these options have much more to do with the algorithm each one uses than with any difference between list comprehensions and lambda.
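To make that concrete, here is a minimal sketch (my own, not part of the original answer): converting S to a set once replaces each linear scan with an average O(1) hash lookup, which is why the set-based option wins by such a margin.

import time

S = [x for x in range(1000000)]
T = [y**2 for y in range(300)]

time1 = time.time()
S_set = set(S)                      # one O(len(S)) set construction
N = [x for x in T if x in S_set]    # ~300 hash lookups, no linear scans
time2 = time.time()
print 'time diff set-based filter =', time2 - time1

The work drops from roughly len(S) * len(T) comparisons (or len(T) linear searches) to one pass over S plus len(T) constant-time lookups.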

Solution 2:

When I fix your code so that the list comprehension and the call to filter are actually doing the same work, things change a whole lot:

import time

S = [x for x in range(1000000)]
T = [y**2 for y in range(300)]
##
time1 = time.time()
N = [x for x in T if x in S]
time2 = time.time()
print 'time diff [x for x in T if x in S]=', time2 - time1
#print N
##
time1 = time.time()
N = filter(lambda x: x in S, T)
time2 = time.time()
print 'time diff filter(lambda x:x in S,T)=', time2 - time1
#print N

Then the output is more like:

time diff [x for x in T if x in S]= 0.414485931396
time diff filter(lambda x:x in S,T)= 0.466315984726

So the list comprehension's time is generally pretty close to, and usually less than, the lambda expression's.
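For steadier numbers than back-to-back time.time() calls, the same comparison can be run through the stdlib timeit module (a sketch of mine, Python 2.6+, not part of the original answer):

import timeit

setup = 'from __main__ import S, T'
print timeit.timeit('[x for x in T if x in S]', setup, number=1)
print timeit.timeit('filter(lambda x: x in S, T)', setup, number=1)

timeit disables garbage collection while measuring, which makes repeated runs less noisy.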

The reason lambda expressions are being phased out is that many people think they are a lot less readable than list comprehensions. I sort of reluctantly agree.

Solution 3:

Q: Why is lambda etc being pushed aside?

A: List comprehensions and generator expressions are generally considered to be a nice mix of power and readability. The pure functional-programming style where you use map(), reduce(), and filter() with functions (often lambda functions) is considered not as clear. Also, Python has added built-in functions that nicely handle all the major uses for reduce().

Suppose you wanted to sum a list. Here are two ways of doing it.

lst = range(10)
print reduce(lambda x, y: x + y, lst)

print sum(lst)

Sign me up as a fan of sum() and not a fan of reduce() to solve this problem. Here's another, similar problem:

lst = range(10)
print reduce(lambda x, y: bool(x or y), lst)

print any(lst)

Not only is the any() solution easier to understand, but it's also much faster; it uses short-circuit evaluation, so it stops as soon as it finds any true value. The reduce() has to crank through the entire list. This performance difference would be stark if the list were a million items long and the first item evaluated true. By the way, any() was added in Python 2.5; if you don't have it, here is a version for older versions of Python:

def any(iterable):
    for x in iterable:
        if x:
            return True
    return False

Suppose you wanted to make a list of squares of even numbers from some list.

lst = range(10)
print map(lambda x: x**2, filter(lambda x: x % 2 == 0, lst))

print [x**2 for x in lst if x % 2 == 0]

Now suppose you wanted to sum that list of squares.

lst = range(10)
print sum(map(lambda x: x**2, filter(lambda x: x % 2 == 0, lst)))

# list comprehension version of the above
print sum([x**2 for x in lst if x % 2 == 0])

# generator expression version; note the lack of '[' and ']'
print sum(x**2 for x in lst if x % 2 == 0)

The generator expression actually just returns an iterable object. sum() takes the iterable and pulls values from it, one by one, summing as it goes, until all the values are consumed. This is the most efficient way you can solve this problem in Python. In contrast, the map() solution, and the equivalent solution with a list comprehension inside the call to sum(), must first build a list; this list is then passed to sum(), used once, and discarded. The time to build the list and then delete it again is just wasted. (EDIT: and note that the version with both map and filter must build two lists, one built by filter and one built by map; both lists are discarded.) (EDIT: But in Python 3.0 and newer, map() and filter() are now both "lazy" and produce an iterator instead of a list; so this point is less true than it used to be. Also, in Python 2.x you were able to use itertools.imap() and itertools.ifilter() for iterator-based map and filter. But I continue to prefer the generator expression solutions over any map/filter solutions.)
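To illustrate the Python 2.x escape hatch mentioned above, here is a minimal sketch using itertools (mine, not from the original answer); imap() and ifilter() yield items lazily, so no intermediate lists are built:

from itertools import imap, ifilter

lst = range(10)
# lazy map/filter: each squared value is consumed by sum() as it is produced
print sum(imap(lambda x: x**2, ifilter(lambda x: x % 2 == 0, lst)))

# the generator expression does the same thing and reads more directly
print sum(x**2 for x in lst if x % 2 == 0)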

By composing map(), filter(), and reduce() in combination with lambda functions, you can do many powerful things. But Python has idiomatic ways to solve the same problems which are simultaneously better performing and easier to read and understand.

Solution 4:

Many people have already pointed out that you're comparing apples with oranges, etc, etc. But I think nobody's shown a really simple comparison -- list comprehension vs map plus lambda, with little else to get in the way -- and that might be:

$ python -mtimeit -s'L=range(1000)' 'map(lambda x: x+1, L)'
1000 loops, best of 3: 328 usec per loop
$ python -mtimeit -s'L=range(1000)' '[x+1 for x in L]'
10000 loops, best of 3: 129 usec per loop

Here, you can see very sharply the cost of lambda -- about 200 microseconds, which in the case of a sufficiently simple operation such as this one swamps the operation itself.

Numbers are very similar with filter of course, since the problem is not filter or map, but rather the lambda itself:

$ python -mtimeit -s'L=range(1000)' '[x for x in L if not x%7]'
10000 loops, best of 3: 162 usec per loop
$ python -mtimeit -s'L=range(1000)' 'filter(lambda x: not x%7, L)'
1000 loops, best of 3: 334 usec per loop

No doubt the fact that lambda can be less clear, or its weird connection with Sparta (Spartans had a Lambda, for "Lakedaimon", painted on their shields -- this suggests that lambda is pretty dictatorial and bloody;-), has at least as much to do with its slowly falling out of fashion as its performance costs do. But the latter are quite real.
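One more check worth making (my sketch, not part of the original answer): the extra ~200 usec is the cost of a Python-level function call per element, not of the lambda keyword itself, so a named def should time about the same as the lambda:

$ python -mtimeit -s'L=range(1000)' -s'def inc(x): return x+1' 'map(inc, L)'
$ python -mtimeit -s'L=range(1000)' 'map(lambda x: x+1, L)'

If those two come out close on your machine, the comprehension's edge is simply that it inlines x+1 instead of making a function call for each element.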

Solution 5:

First of all, test like this:

import timeit

S = [x for x in range(10000)]
T = [y**2 for y in range(30)]

print "v1", timeit.Timer('[x for x in S for y in T if x==y]',
             'from __main__ import S,T').timeit(100)
print "v2", timeit.Timer('filter(lambda x:x in S,T)',
             'from __main__ import S,T').timeit(100)
print "v3", timeit.Timer('[val for val in T if val in S]',
             'from __main__ import S,T').timeit(100)
print "v4", timeit.Timer('list(set(S) & set(T))',
             'from __main__ import S,T').timeit(100)

And basically you are doing different things each time you test. If you rewrite the list comprehension, for example, as

[val for val in T if val in S]

performance would be on par with the 'lambda/filter' construct.
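For example, adding the rewritten version to the same harness (my addition, with a hypothetical "v5" label):

print "v5", timeit.Timer('[val for val in T if val in S]',
             'from __main__ import S,T').timeit(100)

v5 and v2 then run the same algorithm -- a linear search through S for each element of T -- so their timings should land close together.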
