Foundation
L class and helpers for it
Foundational Functions
working_directory
def working_directory(
path
):
Change working directory to path and return to previous on exit.
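For example, a minimal sketch (not from the original docs) showing that the previous directory is restored on exit:
import os
start = os.getcwd()
with working_directory('..'): assert os.getcwd() != start  # temporarily in the parent directory
assert os.getcwd() == start  # back to the original directory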
add_docs
def add_docs(
cls, cls_doc:NoneType=None, docs:VAR_KEYWORD
):
Copy values from docs to cls docstrings, and confirm all public methods are documented
add_docs allows you to add docstrings to a class and its associated methods. This function allows you to group docstrings together, separate from your code, which enables you to define one-line functions as well as organize your code more succinctly. We believe this confers a number of benefits, which we discuss in our style guide.
Suppose you have the following undocumented class:
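class T:
    def foo(self): pass
    def bar(self): pass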
You can add documentation to this class like so:
add_docs(T, cls_doc="A docstring for the class.",
foo="The foo method.",
bar="The bar method.")Now, docstrings will appear as expected:
test_eq(T.__doc__, "A docstring for the class.")
test_eq(T.foo.__doc__, "The foo method.")
test_eq(T.bar.__doc__, "The bar method.")
add_docs also validates that all of your public methods contain a docstring. If one of your methods is not documented, it will raise an error:
class T:
    def foo(self): pass
    def bar(self): pass
f=lambda: add_docs(T, "A docstring for the class.", foo="The foo method.")
test_fail(f, contains="Missing docs")
docs
def docs(
cls
):
Decorator version of add_docs, using _docs dict
Instead of using add_docs, you can use the decorator docs as shown below. Note that the docstring for the class can be set with the argument cls_doc:
@docs
class _T:
    def f(self): pass
    def g(cls): pass
    _docs = dict(cls_doc="The class docstring",
                 f="The docstring for method f.",
                 g="A different docstring for method g.")
test_eq(_T.__doc__, "The class docstring")
test_eq(_T.f.__doc__, "The docstring for method f.")
test_eq(_T.g.__doc__, "A different docstring for method g.")
For either the docs decorator or the add_docs function, you can still define your docstrings in the normal way. Below we set the docstring for the class as usual, but define the method docstrings through the _docs attribute:
@docs
class _T:
"The class docstring"
def f(self): pass
_docs = dict(f="The docstring for method f.")
test_eq(_T.__doc__, "The class docstring")
test_eq(_T.f.__doc__, "The docstring for method f.")
is_iter
def is_iter(
o
):
Test whether o can be used in a for loop
assert is_iter([1])
assert not is_iter(array(1))
assert is_iter(array([1,2]))
assert is_iter(o for o in range(3))
coll_repr
def coll_repr(
c, max_n:int=250
):
String repr of up to max_n items of (possibly lazy) collection c
coll_repr is used to provide a more informative __repr__ for list-like objects. It is used by L to build a __repr__ that displays the length of the list in addition to a preview of its items.
test_eq(coll_repr(range(1000),10), '(#1000) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9...]')
test_eq(coll_repr(range(1000), 5), '(#1000) [0, 1, 2, 3, 4...]')
test_eq(coll_repr(range(10), 5), '(#10) [0, 1, 2, 3, 4...]')
test_eq(coll_repr(range(5), 5), '[0, 1, 2, 3, 4]')
is_bool
def is_bool(
x
):
Check whether x is a bool or None
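For example, a brief sketch based on the docstring (not from the original docs):
assert is_bool(True)
assert is_bool(None)   # None counts as a bool here
assert not is_bool(1)  # plain ints do not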
mask2idxs
def mask2idxs(
mask
):
Convert bool mask or index list to index L
test_eq(mask2idxs([False,True,False,True]), [1,3])
test_eq(mask2idxs(array([False,True,False,True])), [1,3])
test_eq(mask2idxs(array([1,2,3])), [1,2,3])
cycle
def cycle(
o
):
Like itertools.cycle except creates list of Nones if o is empty
test_eq(itertools.islice(cycle([1,2,3]),5), [1,2,3,1,2])
test_eq(itertools.islice(cycle([]),3), [None]*3)
test_eq(itertools.islice(cycle(None),3), [None]*3)
test_eq(itertools.islice(cycle(1),3), [1,1,1])
zip_cycle
def zip_cycle(
x, args:VAR_POSITIONAL
):
Like itertools.zip_longest but cycles through elements of all but first argument
test_eq(zip_cycle([1,2,3,4],list('abc')), [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'a')])
is_indexer
def is_indexer(
idx
):
Test whether idx will index a single item in a list
You can, for example, index a single item in a list with an integer or a 0-dimensional numpy array:
assert is_indexer(1)
assert is_indexer(np.array(1))
However, you cannot index a single item in a list with another list, or with a numpy array with ndim > 0:
assert not is_indexer([1, 2])
assert not is_indexer(np.array([[1, 2], [3, 4]]))
product
def product(
xs
):
The product of elements of xs, with Nones removed
product([None, 3, 4, 5])
60
product([])
1
sum([])
0
flatmap
def flatmap(
f, xs
):
Apply f to each element and flatten the results into a single list.
flatmap is a fundamental operation in functional programming that combines mapping and flattening into a single step. Where map applies a function to each element and returns a list of results, flatmap goes further: it expects the function to return a sequence for each element, then concatenates all those sequences into one flat list, which is useful for operations where each input naturally produces zero, one, or many outputs.
flatmap(f, xs) is just a named abstraction for the list comprehension [y for x in xs for y in f(x)]. Giving it a name makes the intent clearer and the code more readable.
flatmap(range, range(4))
[0, 0, 1, 0, 1, 2]
Compare map (which nests results) with flatmap (which flattens them):
list(map(str.split, ["hello world", "foo bar"])) # nested
[['hello', 'world'], ['foo', 'bar']]
flatmap(str.split, ["hello world", "flatmap rocks"])
['hello', 'world', 'flatmap', 'rocks']
Common use cases include: parsing structured text (splitting lines into words), expanding nested data (extracting all emails from a list of contacts), filtering with transformation (keeping and transforming only valid items), and traversing hierarchies (listing files across multiple directories). The pattern elegantly handles “optional” results too—return an empty list to skip an item, or a single-element list to include it. This avoids the nested lists you’d get from map followed by a separate flatten, and expresses the intent more directly. Below we show a few examples.
Parse CSV-like lines into all values:
flatmap(Self.split(','), ["a,b,c", "d,e"])
['a', 'b', 'c', 'd', 'e']
Return [] to skip an item, [x] to keep it, or [x, y, ...] to expand it:
flatmap(lambda x: [x*10] if x else [], [1, 0, 2]) # skips zeros
[10, 20]
Extract all the emails from a list of contacts:
dat = [{'emails': ['[email protected]','[email protected]']}, {'emails': []}, {'emails': ['[email protected]']}]
flatmap(Self['emails'], dat)
All files in multiple directories:
flatmap(Self.iterdir(), [Path('files'), Path('images')])
[Path('files/test.txt.bz2'),
Path('images/mnist3.png'),
Path('images/att_00000.png'),
Path('images/att_00005.png'),
Path('images/att_00007.png'),
Path('images/att_00006.png'),
Path('images/puppy.jpg')]
Pair each item with its factors:
def factpairs(n): return [(n,i) for i in range(1,n+1) if n%i==0]
flatmap(factpairs, [6,10])
[(6, 1), (6, 2), (6, 3), (6, 6), (10, 1), (10, 2), (10, 5), (10, 10)]
L helpers
CollBase
def CollBase(
items
):
Base class for composing a list of items
CollBase is a base class that emulates the functionality of a Python list:
class _T(CollBase): pass
l = _T([1,2,3,4,5])
test_eq(len(l), 5) # __len__
test_eq(l[-1], 5); test_eq(l[0], 1) #__getitem__
l[2] = 100; test_eq(l[2], 100) # __setitem__
del l[0]; test_eq(len(l), 4) # __delitem__
test_eq(str(l), '[2, 100, 4, 5]') # __repr__
L
def L(
items:NoneType=None, rest:VAR_POSITIONAL, use_list:bool=False, match:NoneType=None
):
Behaves like a list of items but can also index with list of indices or masks
L is a drop-in replacement for a Python list. Inspired by NumPy, L supports advanced indexing and has additional methods (outlined below) that provide extra functionality and encourage simple, expressive code.
Examples and overview
from fastcore.utils import gt
Read this overview section for a quick tutorial of L, as well as background on the name.
You can create an L from an existing iterable (e.g. a list, range, etc) and access or modify it with an int list/tuple index, mask, int, or slice. All list methods can also be used with L.
t = L(range(12))
test_eq(t, list(range(12)))
test_ne(t, list(range(11)))
t[3] = "h"
test_eq(t[3], "h")
t[3,5] = ("j","k")
test_eq(t[3,5], ["j","k"])
test_eq(t, L(t))
test_eq(L(L(1,2),[3,4]), ([1,2],[3,4]))
t[0:3] = [1, 2, 3]
test_eq(t[0:3], [1, 2, 3])
t
[1, 2, 3, 'j', 4, 'k', 6, 7, 8, 9, 10, 11]
Any L is a Sequence so you can use it with methods like random.sample:
assert isinstance(t, Sequence)
import random
random.seed(0)
random.sample(t, 3)
[6, 11, 1]
There are optimized indexers for arrays, tensors, and DataFrames.
import pandas as pd
arr = np.arange(9).reshape(3,3)
t = L(arr, use_list=None)
test_eq(t[1,2], arr[[1,2]])
df = pd.DataFrame({'a':[1,2,3]})
t = L(df, use_list=None)
test_eq(t[1,2], L(pd.DataFrame({'a':[2,3]}, index=[1,2]), use_list=None))
You can also modify an L with append, +, and *.
t = L()
test_eq(t, [])
t.append(1)
test_eq(t, [1])
t += [3,2]
test_eq(t, [1,3,2])
t = t + [4]
test_eq(t, [1,3,2,4])
t = 5 + t
test_eq(t, [5,1,3,2,4])
test_eq(L(1,2,3), [1,2,3])
test_eq(L(1,2,3), L(1,2,3))
t = L(1)*5
test_eq(t, [1,1,1,1,1])
test_eq(~L([True,False,False]), L([False,True,True]))
An L can be constructed from anything iterable, although tensors and arrays will not be iterated over on construction, unless you pass use_list to the constructor.
test_eq(L([1,2,3]),[1,2,3])
test_eq(L(L([1,2,3])),[1,2,3])
test_ne(L([1,2,3]),[1,2,])
test_eq(L('abc'),['abc'])
test_eq(L(range(0,3)),[0,1,2])
test_eq(L(o for o in range(0,3)),[0,1,2])
test_eq(L(array(0)),[array(0)])
test_eq(L([array(0),array(1)]),[array(0),array(1)])
test_eq(L(array([0.,1.1]))[0],array([0.,1.1]))
test_eq(L(array([0.,1.1]), use_list=True), [array(0.),array(1.1)]) # `use_list=True` to unwrap arrays/tensors
If match is not None then the created list is the same length as match, either by:
- If len(items)==1 then items is replicated,
- Otherwise an error is raised if match and items are not already the same size.
test_eq(L(1,match=[1,2,3]),[1,1,1])
test_eq(L([1,2],match=[2,3]),[1,2])
test_fail(lambda: L([1,2],match=[1,2,3]))
If you create an L from an existing L then you’ll get back the original object (since L uses the NewChkMeta metaclass).
test_is(L(t), t)
An L is considered equal to a list if they have the same elements. It is never considered equal to a str, a set, or a dict, even if they have the same elements/keys.
test_eq(L(['a', 'b']), ['a', 'b'])
test_ne(L(['a', 'b']), 'ab')
test_ne(L(['a', 'b']), {'a':1, 'b':2})
L Methods
L.__getitem__
def __getitem__(
idx
):
Retrieve idx (can be list of indices, or mask, or int) items
t = L(range(12))
test_eq(t[1,2], [1,2]) # implicit tuple
test_eq(t[[1,2]], [1,2]) # list
test_eq(t[:3], [0,1,2]) # slice
test_eq(t[[False]*11 + [True]], [11]) # mask
test_eq(t[array(3)], 3)
L.__setitem__
def __setitem__(
idx, o
):
Set idx (can be list of indices, or mask, or int) items to o (which is broadcast if not iterable)
t[4,6] = 0
test_eq(t[4,6], [0,0])
t[4,6] = [1,2]
test_eq(t[4,6], [1,2])
L.unique
def unique(
sort:bool=False, bidir:bool=False, start:NoneType=None
):
Unique items, in stable order
test_eq(L(4,1,2,3,4,4).unique(), [4,1,2,3])
L.val2idx
def val2idx(
):
Dict from value to index
test_eq(L(1,2,3).val2idx(), {3:2,1:0,2:1})
L.range
def range(
a, b:NoneType=None, step:NoneType=None
):
Class Method: Same as range, but returns L. Can pass collection for a, to use len(a)
test_eq_type(L.range([1,1,1]), L(range(3)))
test_eq_type(L.range(5,2,2), L(range(5,2,2)))
L.enumerate
def enumerate(
):
Same as enumerate
test_eq(L('a','b','c').enumerate(), [(0,'a'),(1,'b'),(2,'c')])
L.renumerate
def renumerate(
):
Same as renumerate
test_eq(L('a','b','c').renumerate(), [('a', 0), ('b', 1), ('c', 2)])
L.split
def split(
s, sep:NoneType=None, maxsplit:int=-1
):
Class Method: Same as str.split, but returns an L
L.split is a class method that works like str.split, but returns an L instead of a list:
test_eq(L.split('a b c'), ['a','b','c'])
test_eq(L.split('a-b-c', '-'), ['a','b','c'])
test_eq(L.split('a-b-c', '-', maxsplit=1), ['a','b-c'])
L.splitlines
def splitlines(
s, keepends:bool=False
):
Class Method: Same as str.splitlines, but returns an L
L.splitlines is a class method that works like str.splitlines, but returns an L instead of a list:
test_eq(L.splitlines('a\nb\nc'), ['a','b','c'])
test_eq(L.splitlines('a\nb\nc', keepends=True), ['a\n','b\n','c'])
curryable
def curryable(
f
):
The curryable decorator enables a powerful pattern: methods decorated with it can be called either as instance methods (the normal way) or as class methods that return a partial function.
For instance, consider processing nested data structures. Without curryable, you’d write:
L(lines).map(lambda x: L(x).map(int))
With curryable, you can write:
L(lines).map(L.map(int))
When you call L.map(int) on the class (not an instance), the decorator returns a functools.partial that waits for an iterable to be passed in later.
This pattern is especially valuable for data parsing pipelines where you’re frequently mapping transformations over nested structures. The curried form reads more naturally and composes well with other curried functions like splitter() and linesplitter().
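For example, a minimal sketch (the lines value here is hypothetical, not from the original docs):
lines = [['1','2','3'], ['4','5']]
test_eq(L(lines).map(L.map(int)), [[1,2,3],[4,5]])  # each inner list mapped to ints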
map
def map(
f, args:VAR_POSITIONAL, kwargs:VAR_KEYWORD
):
Create new L with f applied to all items, passing args and kwargs to f
test_eq(L.range(4).map(operator.neg), [0,-1,-2,-3])
If f is a string then it is treated as a format string to create the mapping:
test_eq(L.range(4).map('#{}#'), ['#0#','#1#','#2#','#3#'])
If f is a dictionary (or anything supporting __getitem__) then it is indexed to create the mapping:
test_eq(L.range(4).map(list('abcd')), list('abcd'))
You can also pass the same arg params that bind accepts:
def f(a=None,b=None): return b
test_eq(L.range(4).map(f, b=arg0), range(4))
splitter
def splitter(
sep:NoneType=None, maxsplit:int=-1
):
Create a partial function that splits strings into L
A curried version of L.split, useful for mapping over collections of strings. For instance, to split some lines with the same separator:
data = '''1,2,3
4,5,6
7,8,9'''
grid = L.splitlines(data).map(splitter(','))
grid
[['1', '2', '3'], ['4', '5', '6'], ['7', '8', '9']]
As mentioned in the curryable discussion, map can be curried. This can work well together with L.splitlines output:
intgrid = grid.map(L.map(int))
intgrid
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Although in this particular example numpy has a useful shortcut:
np.genfromtxt(data.splitlines(), delimiter=',', dtype=int)
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
linesplitter
def linesplitter(
keepends:bool=False
):
Create a partial function that splits strings by lines into L
A curried version of L.splitlines, useful for splitting multi-line strings into Ls when mapping over a collection.
L(['a\nb\nc', 'd\ne']).map(linesplitter())
[['a', 'b', 'c'], ['d', 'e']]
groupby
def groupby(
key, val:function=noop
):
Same as fastcore.basics.groupby
words = L.split('aaa abc bba')
test_eq(words.groupby(0, (1,2)), {'a':[('a','a'),('b','c')], 'b':[('b','a')]})
L.groupby can also be used in curried form, which is useful when you need to apply the same grouping operation across multiple collections.
L([['a1','b2','a3'], ['x1','y2','x3']]).map(L.groupby(0))
[{'a': ['a1', 'a3'], 'b': ['b2']}, {'x': ['x1', 'x3'], 'y': ['y2']}]
starmap
def starmap(
f, args:VAR_POSITIONAL, kwargs:VAR_KEYWORD
):
Like map, but use itertools.starmap
L.starmap applies a function to each element, unpacking tuples as arguments:
test_eq(L([(1,2),(3,4)]).starmap(operator.add), [3,7])
test_eq(L([(1,2,3),(4,5,6)]).starmap(lambda a,b,c: a+b*c), [7,34])
The curried form of L.starmap is useful when you need to apply the same starmap operation across nested structures. For example, when you have a list of lists of tuples and want to apply a function that unpacks each tuple:
nested = L([[(1,2),(3,4)], [(5,6),(7,8)]])
nested.map(L.starmap(operator.mul))
[[2, 12], [30, 56]]
rstarmap
def rstarmap(
f, args:VAR_POSITIONAL, kwargs:VAR_KEYWORD
):
Like starmap, but reverse the order of args
L.rstarmap is like starmap, but reverses the order of unpacked arguments:
test_eq(L((1,2),(3,4)).rstarmap(operator.sub), [1,1]) # 2-1, 4-3
test_eq(L(('a','b'),('c','d')).rstarmap('{}{}'.format), ['ba','dc'])
The curried form of L.rstarmap is useful when mapping over nested structures where you need reversed argument order. This commonly occurs when processing pairs where the second element should be the first argument to a function:
nested = L([[('x',1),('y',2)], [('z',3)]])
nested.map(L.rstarmap('{}{}'.format))
[['1x', '2y'], ['3z']]
L.map_dict
def map_dict(
f:function=noop, args:VAR_POSITIONAL, kwargs:VAR_KEYWORD
):
Like map, but creates a dict from items to function results
test_eq(L(range(1,5)).map_dict(), {1:1, 2:2, 3:3, 4:4})
test_eq(L(range(1,5)).map_dict(operator.neg), {1:-1, 2:-2, 3:-3, 4:-4})
L.zip
def zip(
cycled:bool=False
):
Create new L with zip(*items)
t = L([[1,2,3],'abc'])
test_eq(t.zip(), [(1, 'a'),(2, 'b'),(3, 'c')])
t = L([[1,2,3,4],['a','b','c']])
test_eq(t.zip(cycled=True ), [(1, 'a'),(2, 'b'),(3, 'c'),(4, 'a')])
test_eq(t.zip(cycled=False), [(1, 'a'),(2, 'b'),(3, 'c')])
L.map_zip
def map_zip(
f, args:VAR_POSITIONAL, cycled:bool=False, kwargs:VAR_KEYWORD
):
Combine zip and starmap
t = L([1,2,3],[2,3,4])
test_eq(t.map_zip(operator.mul), [2,6,12])
L.zipwith
def zipwith(
rest:VAR_POSITIONAL, cycled:bool=False
):
Create new L with self zip with each of *rest
b = [[0],[1],[2,2]]
t = L([1,2,3]).zipwith(b)
test_eq(t, [(1,[0]), (2,[1]), (3,[2,2])])
L.map_zipwith
def map_zipwith(
f, rest:VAR_POSITIONAL, cycled:bool=False, kwargs:VAR_KEYWORD
):
Combine zipwith and starmap
test_eq(L(1,2,3).map_zipwith(operator.mul, [2,3,4]), [2,6,12])
filter
def filter(
f:function=noop, negate:bool=False, kwargs:VAR_KEYWORD
):
Create new L filtered by predicate f, passing args and kwargs to f
t = L(range(12))
test_eq(t.filter(lambda o:o<5), [0,1,2,3,4])
test_eq(t.filter(lambda o:o<5, negate=True), [5,6,7,8,9,10,11])
L.filter can be used as a curried class method, returning a partial that filters any iterable and wraps the result in an L. This is useful when mapping a filter operation over nested collections.
intgrid
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
intgrid.map(L.filter(ge(5)))
[[], [5, 6], [7, 8, 9]]
starfilter
def starfilter(
f, negate:bool=False, kwargs:VAR_KEYWORD
):
Like filter, but unpacks elements as args to f
L.starfilter is like filter, but unpacks tuple elements as arguments to the predicate:
test_eq(L((1,2),(3,1),(2,3)).starfilter(lt), [(1,2),(2,3)])
test_eq(L((1,2),(3,1),(2,3)).starfilter(lt, negate=True), [(3,1)])
Curried L.starfilter is useful when mapping a starfilter operation over nested collections—each inner collection gets filtered by unpacking its tuples as arguments to the predicate, e.g. to filter pairs where first < second, across multiple lists of pairs:
nested = L([[(1,5),(3,2)], [(4,6),(9,1)]])
nested.map(L.starfilter(lt))
[[(1, 5)], [(4, 6)]]
rstarfilter
def rstarfilter(
f, negate:bool=False, kwargs:VAR_KEYWORD
):
Like starfilter, but reverse the order of args
L.rstarfilter is like starfilter, but reverses the order of unpacked arguments (and can also be curried):
test_eq(L((2,1),(1,3),(3,2)).rstarfilter(lt), [(2,1),(3,2)]) # 1<2, 3<1 fails, 2<3
test_eq(L((2,1),(1,3),(3,2)).rstarfilter(lt, negate=True), [(1,3)])
argwhere
def argwhere(
f, negate:bool=False, kwargs:VAR_KEYWORD
):
Like filter, but return indices for matching items
t = L([0,1,2,3,4,99,0])
test_eq(t.argwhere(lambda o:o<5), [0,1,2,3,4,6])
starargwhere
def starargwhere(
f, negate:bool=False
):
Like argwhere, but unpacks elements as args to f
L.starargwhere is like argwhere, but unpacks tuple elements as arguments to the predicate (it is also curryable):
test_eq(L((1,2),(3,1),(2,3)).starargwhere(lt), [0,2])
test_eq(L((1,2),(3,1),(2,3)).starargwhere(lt, negate=True), [1])
rstarargwhere
def rstarargwhere(
f, negate:bool=False
):
Like starargwhere, but reverse the order of args
L.rstarargwhere is like starargwhere, but reverses the order of unpacked arguments (it is also curryable):
test_eq(L((2,1),(1,3),(3,2)).rstarargwhere(lt), [0,2]) # 1<2, 3<1 fails, 2<3
test_eq(L((2,1),(1,3),(3,2)).rstarargwhere(lt, negate=True), [1])
argfirst
def argfirst(
f, negate:bool=False
):
Return index of first matching item
test_eq(t.argfirst(lambda o:o>4), 5)
test_eq(t.argfirst(lambda o:o>4,negate=True),0)
Curried L.argfirst returns a partial function that finds the index of the first matching item in any iterable. This is useful when mapping over nested collections to find the first match in each.
nested = L([[1,2,8,4], [5,9,7], [1,1,1]])
nested.map(L.argfirst(gt(5)))
[2, 1, None]
starargfirst
def starargfirst(
f, negate:bool=False
):
Like argfirst, but unpacks elements as args to f
L.starargfirst is like argfirst, but unpacks tuple elements as arguments to the predicate (and is curryable):
test_eq(L((3,1),(1,2),(2,3)).starargfirst(lt), 1)
test_eq(L((1,2),(3,1),(2,3)).starargfirst(lt, negate=True), 1)
rstarargfirst
def rstarargfirst(
f, negate:bool=False
):
Like starargfirst, but reverse the order of args
L.rstarargfirst is like starargfirst, but reverses the order of unpacked arguments (and is curryable):
test_eq(L((1,3),(2,1),(3,2)).rstarargfirst(lt), 1) # 3<1 fails, 1<2
test_eq(L((2,1),(1,3),(3,2)).rstarargfirst(lt, negate=True), 1)
L.itemgot
def itemgot(
idxs:VAR_POSITIONAL
):
Create new L with item idx of all items
t = L([['x', [0]], ['y', [1]], ['z', [2,2]]])
test_eq(t.itemgot(1), b)
L.attrgot
def attrgot(
k, default:NoneType=None
):
Create new L with attr k (or value k for dicts) of all items.
# Example when items are not a dict
a = [SimpleNamespace(a=3,b=4),SimpleNamespace(a=1,b=2)]
test_eq(L(a).attrgot('b'), [4,2])
# Example when items are a dict
b =[{'id': 15, 'name': 'nbdev'}, {'id': 17, 'name': 'fastcore'}]
test_eq(L(b).attrgot('id'), [15, 17])
sorted
def sorted(
key:NoneType=None, reverse:bool=False, cmp:NoneType=None, kwargs:VAR_KEYWORD
):
New L sorted by key, using sort_ex. If key is str use attrgetter; if int use itemgetter
test_eq(L(a).sorted('a').attrgot('b'), [2,4])
Curried L.sorted returns a partial function that sorts any iterable by the given key. This is useful when mapping a sort operation over nested collections—each inner collection gets sorted independently using the same key.
nested = L([[(3,'c'),(1,'a'),(2,'b')], [(6,'f'),(4,'d')]])
nested.map(L.sorted(0))
[[(1, 'a'), (2, 'b'), (3, 'c')], [(4, 'd'), (6, 'f')]]
starsorted
def starsorted(
key, reverse:bool=False
):
Like sorted, but unpacks elements as args to key
L.starsorted is like sorted, but unpacks tuple elements as arguments to the key function:
test_eq(L((3,1),(1,2),(2,0)).starsorted(operator.sub), [(1,2),(3,1),(2,0)]) # sorted by a-b: -1, 2, 2
test_eq(L((1,2),(3,1),(2,3)).starsorted(operator.add), [(1,2),(3,1),(2,3)]) # sorted by a+b: 3, 4, 5
rstarsorted
def rstarsorted(
key, reverse:bool=False
):
Like starsorted, but reverse the order of args
L.rstarsorted is like starsorted, but reverses the order of unpacked arguments:
test_eq(L((1,3),(2,1),(0,2)).rstarsorted(operator.sub), [(2,1),(1,3),(0,2)]) # sorted by b-a: -1, 2, 2
test_eq(L((2,1),(1,3),(3,2)).rstarsorted(operator.sub), [(2,1),(3,2),(1,3)]) # sorted by b-a: -1, -1, 2
L.concat
def concat(
):
Concatenate all elements of list
test_eq(L([0,1,2,3],4,L(5,6)).concat(), range(7))
L.copy
def copy(
):
Same as list.copy, but returns an L
t = L([0,1,2,3],4,L(5,6)).copy()
test_eq(t.concat(), range(7))
L.shuffle
def shuffle(
):
Same as random.shuffle, but not inplace
L.shuffle returns a new shuffled L, leaving the original unchanged:
t = L(1,2,3,4,5)
s = t.shuffle()
test_eq(set(s), set(t)) # same elements
test_eq(t, [1,2,3,4,5]) # original unchanged
reduce
def reduce(
f, initial:NoneType=None
):
Wrapper for functools.reduce
test_eq(L(1,2,3,4).reduce(operator.add), 10)
test_eq(L(1,2,3,4).reduce(operator.mul, 10), 240)
Curried L.reduce returns a partial function that reduces any iterable using the given function. This is useful when mapping a reduction over nested collections—each inner collection gets reduced independently using the same operation.
nested = L([[1,2,3], [4,5], [6,7,8,9]])
nested.map(L.reduce(operator.add))
[6, 9, 30]
starreduce
def starreduce(
f, initial:NoneType=None
):
Like reduce, but unpacks elements as args to f
L.starreduce is like reduce, but unpacks tuple elements as additional arguments to f (after accumulator):
test_eq(L((1,2),(3,4),(5,6)).starreduce(lambda acc,a,b: acc+a*b, 0), 44) # 0+1*2+3*4+5*6
test_eq(L(('a',1),('b',2)).starreduce(lambda acc,k,v: {**acc, k:v}, {}), {'a':1,'b':2})
For example, implement a dot product:
def dot(a,b): return a.zipwith(b).starreduce(lambda acc,a,b: acc+a*b, 0)
dot(L(1,3,5), L(2,4,6))
44
rstarreduce
def rstarreduce(
f, initial:NoneType=None
):
Like starreduce, but reverse the order of unpacked args
L.rstarreduce is like starreduce, but reverses the order of unpacked arguments:
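For example, a minimal sketch (not from the original docs), assuming that, as with rstarmap, the accumulator stays first and only the tuple elements are reversed:
test_eq(L((1,2),(3,4)).rstarreduce(lambda acc,a,b: acc + a - b, 0), 2)  # (2-1) + (4-3)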
L.sum
def sum(
):
Sum of the items
test_eq(L(1,2,3,4).sum(), 10)
test_eq(L().sum(), 0)
L.product
def product(
):
Product of the items
test_eq(L(1,2,3,4).product(), 24)
test_eq(L().product(), 1)
L.map_first
def map_first(
f:function=noop, g:function=noop, args:VAR_POSITIONAL, kwargs:VAR_KEYWORD
):
First element of map_filter
t = L(0,1,2,3)
test_eq(t.map_first(lambda o:o*2 if o>2 else None), 6)
L.setattrs
def setattrs(
attr, val
):
Call setattr on all items
t = L(SimpleNamespace(),SimpleNamespace())
t.setattrs('foo', 'bar')
test_eq(t.attrgot('foo'), ['bar','bar'])
L.flatmap
def flatmap(
f
):
Apply f to each element and flatten the results into a single L.
L.flatmap is the method version of the flatmap function, allowing you to call it directly on an L instance. It applies a function to each element and flattens the results into a single L. This is useful for operations where each input naturally produces zero, one, or many outputs.
test_eq(L("a,b,c", "d,e").flatmap(Self.split(',')), ['a', 'b', 'c', 'd', 'e'])As an alternative, you can just chain map and concat:
L("a,b,c", "d,e").map(Self.split(',')).concat()['a', 'b', 'c', 'd', 'e']
itertools wrappers
L.cycle
def cycle(
):
Same as itertools.cycle
L.cycle returns an infinite iterator that cycles through the elements:
test_eq(list(itertools.islice(L(1,2,3).cycle(), 7)), [1,2,3,1,2,3,1])
takewhile
def takewhile(
f
):
Same as itertools.takewhile
L.takewhile returns elements from the beginning of the list while the predicate is true:
test_eq(L(1,2,3,4,5,1,2).takewhile(lambda x: x<4), [1,2,3])
test_eq(L(1,2,3,11).takewhile(lt(10)), [1,2,3])
Curried L.takewhile returns a partial function that takes elements from the beginning of any iterable while the predicate holds. This is useful when mapping over nested collections—each inner collection gets truncated at the first failing element using the same predicate.
nested = L([[1,2,5,3], [2,3,8,1], [9,1,2]])
nested.map(L.takewhile(lt(5)))
[[1, 2], [2, 3], []]
dropwhile
def dropwhile(
f
):
Same as itertools.dropwhile
L.dropwhile skips elements from the beginning while the predicate is true, then returns the rest:
test_eq(L(1,2,3,4,5,1,2).dropwhile(lt(4)), [4,5,1,2])
test_eq(L(1,2,3).dropwhile(lt(10)), [])
startakewhile
def startakewhile(
f
):
Like takewhile, but unpacks elements as args to f
L.startakewhile is like takewhile, but unpacks tuple elements as arguments to the predicate:
test_eq(L((1,2),(2,3),(4,1),(5,6)).startakewhile(lambda a,b: a<b), [(1,2),(2,3)])
test_eq(L((1,10),(2,20),(5,3)).startakewhile(lt), [(1,10),(2,20)])
The curried form also works when mapping over nested collections:
nested = L([[(1,5),(2,6),(7,3)], [(0,1),(2,1),(3,9)]])
nested.map(L.startakewhile(lt))
[[(1, 5), (2, 6)], [(0, 1)]]
rstartakewhile
def rstartakewhile(
f
):
Like startakewhile, but reverse the order of args
L.rstartakewhile is like startakewhile, but reverses the order of unpacked arguments:
test_eq(L((2,1),(3,2),(1,4),(6,5)).rstartakewhile(lt), [(2,1),(3,2)]) # 1<2, 2<3, 4<1 fails
test_eq(L((10,1),(20,2),(3,5)).rstartakewhile(lt), [(10,1),(20,2)]) # 1<10, 2<20, 5<3 fails
stardropwhile
def stardropwhile(
f
):
Like dropwhile, but unpacks elements as args to f
L.stardropwhile is like dropwhile, but unpacks tuple elements as arguments to the predicate:
test_eq(L((1,2),(2,3),(4,1),(5,6)).stardropwhile(lambda a,b: a<b), [(4,1),(5,6)])
test_eq(L((1,10),(2,20),(5,3)).stardropwhile(lt), [(5,3)])
rstardropwhile
def rstardropwhile(
f
):
Like stardropwhile, but reverse the order of args
L.rstardropwhile is like stardropwhile, but reverses the order of unpacked arguments:
test_eq(L((2,1),(3,2),(1,4),(6,5)).rstardropwhile(lt), [(1,4),(6,5)]) # 1<2, 2<3 pass, 4<1 fails
test_eq(L((10,1),(20,2),(3,5)).rstardropwhile(lt), [(3,5)])
accumulate
def accumulate(
f=operator.add, initial:NoneType=None
):
Same as itertools.accumulate
L.accumulate returns running totals (or running results of any binary function):
test_eq(L(1,2,3,4).accumulate(), [1,3,6,10])
test_eq(L(1,2,3,4).accumulate(operator.mul), [1,2,6,24])
test_eq(L(1,2,3).accumulate(initial=10), [10,11,13,16])
Curried L.accumulate returns a partial function that computes running totals (or running results of any binary function) on any iterable. This is useful when mapping over nested collections—each inner collection gets its own running accumulation using the same function.
nested = L([[1,2,3], [4,5,6], [10,20]])
nested.map(L.accumulate(operator.mul))
[[1, 2, 6], [4, 20, 120], [10, 200]]
L.pairwise
def pairwise(
):
Same as itertools.pairwise
L.pairwise returns consecutive overlapping pairs:
test_eq(L(1,2,3,4).pairwise(), [(1,2),(2,3),(3,4)])
test_eq(L(list('abcd')).pairwise(), [('a','b'),('b','c'),('c','d')])
L.batched
def batched(
n
):
Same as itertools.batched (but also works on older Python versions)
L.batched splits into chunks of size n:
test_eq(L(1,2,3,4,5).batched(2), [(1,2),(3,4),(5,)])
test_eq(L(list('abcdefg')).batched(3), [('a','b','c'),('d','e','f'),('g',)])
L.compress
def compress(
selectors
):
Same as itertools.compress
L.compress filters elements using a boolean selector:
test_eq(L(list('abcd')).compress([1,0,1,0]), ['a','c'])
test_eq(L(1,2,3,4,5).compress([True,False,True,False,True]), [1,3,5])
L.permutations
def permutations(
r:NoneType=None
):
Same as itertools.permutations
L.permutations returns all permutations of length r (defaults to full length):
test_eq(L(1,2,3).permutations(), [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)])
test_eq(L(list('abc')).permutations(2), [('a','b'),('a','c'),('b','a'),('b','c'),('c','a'),('c','b')])
L.combinations
def combinations(
r
):
Same as itertools.combinations
L.combinations returns all combinations of length r:
test_eq(L(1,2,3,4).combinations(2), [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)])
test_eq(L(list('abcd')).combinations(3), [('a','b','c'),('a','b','d'),('a','c','d'),('b','c','d')])
partition
def partition(
f:function=noop, kwargs:VAR_KEYWORD
):
Split into two Ls based on predicate f: (true_items, false_items)
L.partition splits a list into two Ls based on a predicate—items where f returns true, and items where it returns false:
t,f = L(1,2,3,4,5,6).partition(lambda x: x%2==0)
test_eq(t, [2,4,6])
test_eq(f, [1,3,5])
evens,odds = L.range(10).partition(lambda x: x%2==0)
test_eq(evens, [0,2,4,6,8])
test_eq(odds, [1,3,5,7,9])
Curried L.partition returns a partial function that splits any iterable into two Ls based on a predicate. This is useful when mapping over nested collections—each inner collection gets partitioned independently using the same predicate, returning a tuple of (true_items, false_items) for each.
nested = L([[1,2,3,4,5], [10,15,20,25], [3,6,9]])
nested.map(L.partition(gt(5)))
[([], [1, 2, 3, 4, 5]), ([10, 15, 20, 25], []), ([6, 9], [3])]
starpartition
def starpartition(
f, kwargs:VAR_KEYWORD
):
Like partition, but unpacks elements as args to f
L.starpartition is like partition, but unpacks tuple elements as arguments to the predicate:
asc,desc = L((1,2),(3,1),(2,4),(5,3)).starpartition(lt)
test_eq(asc, [(1,2),(2,4)]) # a < b
test_eq(desc, [(3,1),(5,3)]) # a >= b
rstarpartition
def rstarpartition(
f, kwargs:VAR_KEYWORD
):
Like starpartition, but reverse the order of args
L.rstarpartition is like starpartition, but reverses the order of unpacked arguments:
asc,desc = L((2,1),(1,3),(4,2),(3,5)).rstarpartition(lt)
test_eq(asc, [(2,1),(4,2)]) # b < a (i.e., 1<2, 2<4)
test_eq(desc, [(1,3),(3,5)]) # b >= a
L.flatten
def flatten(
):
Recursively flatten nested iterables (except strings)
L.flatten recursively flattens nested iterables into a single L. Strings are treated as atomic (not iterated over):
test_eq(L([[1,2],[3,[4,5]]]).flatten(), [1,2,3,4,5])
test_eq(L([1,[2,[3,[4]]]]).flatten(), [1,2,3,4])
test_eq(L(['a',['b','c'],'d']).flatten(), ['a','b','c','d']) # strings not flattened
test_eq(L([1,2,3]).flatten(), [1,2,3]) # already flat
Config
save_config_file
def save_config_file(
file, d, kwargs:VAR_KEYWORD
):
Write settings dict to a new config file, or overwrite the existing one.
read_config_file
def read_config_file(
file, kwargs:VAR_KEYWORD
):
Config files are saved and read using Python’s configparser.ConfigParser, inside the DEFAULT section.
_d = dict(user='fastai', lib_name='fastcore', some_path='test', some_bool=True, some_num=3)
try:
    save_config_file('tmp.ini', _d)
    res = read_config_file('tmp.ini')
finally: os.unlink('tmp.ini')
dict(res)
{'user': 'fastai',
'lib_name': 'fastcore',
'some_path': 'test',
'some_bool': 'True',
'some_num': '3'}
Config
def Config(
cfg_path, cfg_name, create:NoneType=None, save:bool=True, extra_files:NoneType=None, types:NoneType=None,
cfg_kwargs:VAR_KEYWORD
):
Reading and writing ConfigParser ini files
Config is a convenient wrapper around ConfigParser ini files with a single section (DEFAULT).
Instantiate a Config from an ini file at cfg_path/cfg_name:
save_config_file('../tmp.ini', _d)
try: cfg = Config('..', 'tmp.ini')
finally: os.unlink('../tmp.ini')
cfg
{'user': 'fastai', 'lib_name': 'fastcore', 'some_path': 'test', 'some_bool': 'True', 'some_num': '3'}
You can create a new file if one doesn’t exist by providing a create dict:
try: cfg = Config('..', 'tmp.ini', create=_d)
finally: os.unlink('../tmp.ini')
cfg
{'user': 'fastai', 'lib_name': 'fastcore', 'some_path': 'test', 'some_bool': 'True', 'some_num': '3'}
If you additionally pass save=False, the Config will contain the items from create without writing a new file:
cfg = Config('..', 'tmp.ini', create=_d, save=False)
test_eq(cfg.user,'fastai')
assert not Path('../tmp.ini').exists()
You can also pass in ConfigParser kwargs to change how your configuration file is parsed. For example, by default, inline comments are not handled by Config. However, if you pass inline_comment_prefixes with your comment symbol, you'll override this default behavior.
# Create a complete example config file with comments
cfg_str = """\
[DEFAULT]
user = fastai # inline comment
# Library configuration
lib_name = fastcore
# Paths
some_path = test
# Feature flags
some_bool = True
# Numeric settings
some_num = # missing value
"""
with open('../tmp.ini', 'w') as f:
    f.write(cfg_str)
# Now read it back to verify
try: cfg = Config('..', 'tmp.ini', inline_comment_prefixes=('#'))
finally: os.unlink('../tmp.ini')
test_eq(cfg.user,'fastai')
test_eq(cfg.some_num,'')
Config.get
def get(
k, default:NoneType=None
):
Keys can be accessed as attributes, items, or with get and an optional default:
test_eq(cfg.user,'fastai')
test_eq(cfg['some_path'], 'test')
test_eq(cfg.get('foo','bar'),'bar')
Extra files can be read before cfg_path/cfg_name using extra_files, in the order they appear:
with tempfile.TemporaryDirectory() as d:
    a = Config(d, 'a.ini', {'a':0,'b':0})
    b = Config(d, 'b.ini', {'a':1,'c':0})
    c = Config(d, 'c.ini', {'a':2,'d':0}, extra_files=[a.config_file,b.config_file])
    test_eq(c.d, {'a':'2','b':'0','c':'0','d':'0'})
If you pass a dict types, then the values of that dict will be used as types to instantiate all values returned. Path is a special case – in that case, the path returned will be relative to the path containing the config file (assuming the value is relative). bool types use str2bool to convert to boolean.
_types = dict(some_path=Path, some_bool=bool, some_num=int)
cfg = Config('..', 'tmp.ini', create=_d, save=False, types=_types)
test_eq(cfg.user,'fastai')
test_eq(cfg['some_path'].resolve(), (Path('..')/'test').resolve())
test_eq(cfg.get('some_num'), 3)
Config.find
def find(
cfg_name, cfg_path:NoneType=None, kwargs:VAR_KEYWORD
):
Search cfg_path and its parents to find cfg_name
You can use Config.find to search a directory and its parents for a config file, starting from the current path if no path is specified:
Config.find('settings.ini').repo
'fastcore'