Python Testing with pytest by Brian Okken


Python Testing with pytest
Simple, Rapid, Effective, and Scalable
by Brian Okken
Version: P1.0 (September 2017)
Copyright © 2017 The Pragmatic Programmers, LLC.

This book is licensed to the individual who purchased it. We don't copy-protect it because that would limit your ability to use it for your own purposes. Please don't break this trust—you can use this across all of your devices but please do not share this copy with other members of your team, with friends, or via file sharing services.
Thanks.
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and The Pragmatic Programmers, LLC was aware of a trademark claim, the designations have been printed in initial capital letters or in all capitals. The Pragmatic Starter Kit, The Pragmatic Programmer, Pragmatic Programming, Pragmatic Bookshelf and the linking g device are trademarks of The Pragmatic Programmers, LLC.
Every precaution was taken in the preparation of this book. However, the publisher assumes no responsibility for errors or omissions, or for damages that may result from the use of information (including program listings) contained herein.

About the Pragmatic Bookshelf

The Pragmatic Bookshelf is an agile publishing company. We're here because we want to improve the lives of developers. We do this by creating timely, practical titles, written by programmers for programmers.
Our Pragmatic courses, workshops, and other products can help you and your team create better software and have more fun. For more information, as well as the latest Pragmatic titles, please visit us at http://pragprog.com.
Our ebooks do not contain any Digital Restrictions Management, and have always been DRM-free. We pioneered the beta book concept, where you can purchase and read a book while it's still being written, and provide feedback to the author to help make a better book for everyone. Free resources for all purchasers include source code downloads (if applicable), errata and discussion forums, all available on the book's home page at pragprog.com. We're here to make your life easier.

New Book Announcements

Want to keep up on our latest titles and announcements, and occasional special offers? Just create an account on pragprog.com (an email address and a password is all it takes) and select the checkbox to receive newsletters. You can also follow us on twitter as @pragprog.

About Ebook Formats

If you buy directly from pragprog.com, you get ebooks in all available formats for one price. You can synch your ebooks amongst all your devices (including iPhone/iPad, Android, laptops, etc.) via Dropbox. You get free updates for the life of the edition. And, of course, you can always come back and re-download your books when needed. Ebooks bought from the Amazon Kindle store are subject to Amazon's policies. Limitations in Amazon's file format may cause ebooks to display differently on different devices. For more information, please see our FAQ at pragprog.com/frequently-asked-questions/ebooks. To learn more about this book and access the free resources, go to https://pragprog.com/book/bopytest, the book's homepage.
Thanks for your continued support,
Andy Hunt
The Pragmatic Programmers

The team that produced this book includes:
Andy Hunt (Publisher)
Janet Furlow (VP of Operations)
Katharine Dvorak (Development Editor)
Potomac Indexing, LLC (Indexing)
Nicole Abramowitz (Copy Editor)
Gilson Graphics (Layout)

For customer support, please contact support@pragprog.com.
For international rights, please contact rights@pragprog.com.

Table of Contents

Acknowledgments
Preface
    What Is pytest?
    Learn pytest While Testing an Example Application
    How This Book Is Organized
    What You Need to Know
    Example Code and Online Resources
1. Getting Started with pytest
    Getting pytest
    Running pytest
    Running Only One Test
    Using Options
    Exercises
    What's Next
2. Writing Test Functions
    Testing a Package
    Using assert Statements
    Expecting Exceptions
    Marking Test Functions
    Skipping Tests
    Marking Tests as Expecting to Fail
    Running a Subset of Tests
    Parametrized Testing
    Exercises
    What's Next
3. pytest Fixtures
    Sharing Fixtures Through conftest.py
    Using Fixtures for Setup and Teardown
    Tracing Fixture Execution with --setup-show
    Using Fixtures for Test Data
    Using Multiple Fixtures
    Specifying Fixture Scope
    Specifying Fixtures with usefixtures
    Using autouse for Fixtures That Always Get Used
    Renaming Fixtures
    Parametrizing Fixtures
    Exercises
    What's Next
4. Builtin Fixtures
    Using tmpdir and tmpdir_factory
    Using pytestconfig
    Using cache
    Using capsys
    Using monkeypatch
    Using doctest_namespace
    Using recwarn
    Exercises
    What's Next
5. Plugins
    Finding Plugins
    Installing Plugins
    Writing Your Own Plugins
    Creating an Installable Plugin
    Testing Plugins
    Creating a Distribution
    Exercises
    What's Next
6. Configuration
    Understanding pytest Configuration Files
    Changing the Default Command-Line Options
    Registering Markers to Avoid Marker Typos
    Requiring a Minimum pytest Version
    Stopping pytest from Looking in the Wrong Places
    Specifying Test Directory Locations
    Changing Test Discovery Rules
    Disallowing XPASS
    Avoiding Filename Collisions
    Exercises
    What's Next
7. Using pytest with Other Tools
    pdb: Debugging Test Failures
    Coverage.py: Determining How Much Code Is Tested
    mock: Swapping Out Part of the System
    tox: Testing Multiple Configurations
    Jenkins CI: Automating Your Automated Tests
    unittest: Running Legacy Tests with pytest
    Exercises
    What's Next
A1. Virtual Environments
A2. pip
A3. Plugin Sampler Pack
    Plugins That Change the Normal Test Run Flow
    Plugins That Alter or Enhance Output
    Plugins for Static Analysis
    Plugins for Web Development
A4. Packaging and Distributing Python Projects
    Creating an Installable Module
    Creating an Installable Package
    Creating a Source Distribution and Wheel
    Creating a PyPI-Installable Package
A5. xUnit Fixtures
    Syntax of xUnit Fixtures
    Mixing pytest Fixtures and xUnit Fixtures
    Limitations of xUnit Fixtures

Copyright © The Pragmatic Bookshelf.

Early praise for Python Testing with pytest

I found Python Testing with pytest to be an eminently usable introductory guidebook to the pytest testing framework. It is already paying dividends for me at my company.
→ Chris Shaver, VP of Product, Uprising Technology

Systematic software testing, especially in the Python community, is often either completely overlooked or done in an ad hoc way. Many Python programmers are completely unaware of the existence of pytest. Brian Okken takes the trouble to show that software testing with pytest is easy, natural, and even exciting.
→ Dmitry Zinoviev, Author of Data Science Essentials in Python

This book is the missing chapter absent from every comprehensive Python book.
→ Frank Ruiz, Principal Site Reliability Engineer, Box, Inc.

Acknowledgments

I first need to thank Michelle, my wife and best friend. I wish you could see the room I get to write in. In place of a desk, I have an antique square oak dining table to give me plenty of room to spread out papers. There's a beautiful glass-front bookcase with my retro space toys that we've collected over the years, as well as technical books, circuit boards, and juggle balls. Vintage aluminum paper storage bins are stacked on top with places for notes, cords, and even leftover book-promotion rocket stickers. One wall is covered in some velvet that we purchased years ago when a fabric store was going out of business. The fabric is to quiet the echoes when I'm recording the podcasts. I love writing here not just because it's wonderful and reflects my personality, but because it's a space that Michelle created with me and for me. She and I have always been a team, and she has been incredibly supportive of my crazy ideas to write a blog, start a podcast or two, and now, for the last year or so, write this book. She has made sure I've had time and space for writing. When I'm tired and don't think I have the energy to write, she tells me to just write for twenty minutes and see how I feel then, just like she did when she helped me get through late nights of study in college. I really, really couldn't do this without her.
I also have two amazingly awesome, curious, and brilliant daughters, Gabriella and Sophia, who are two of my biggest fans. Ella tells anyone talking about programming that they should listen to my podcasts, and Phia sported a Test & Code sticker on the backpack she took to second grade.
There are so many more people to thank.
My editor, Katharine Dvorak, helped me shape lots of random ideas and topics into a cohesive progression, and is the reason why this is a book and not a series of blog posts stapled together. I entered this project as a blogger, and a little too attached to lots of headings, subheadings, and bullet points, and Katie patiently guided me to be a better writer.
Thank you to Susannah Davidson Pfalzer, Andy Hunt, and the rest of The Pragmatic Bookshelf for taking a chance on me.
The technical reviewers have kept me honest on pytest, but also on Python style, and are the reason why the code examples are PEP 8–compliant. Thank you to Oliver Bestwalter, Florian Bruhin, Floris Bruynooghe, Mark Goody, Peter Hampton, Dave Hunt, Al Krinker, Lokesh Kumar Makani, Bruno Oliveira, Ronny Pfannschmidt, Raphael Pierzina, Luciano Ramalho, Frank Ruiz, and Dmitry Zinoviev. Many on that list are also pytest core developers and/or maintainers of incredible pytest plugins.
I need to call out Luciano for a special thank you. Partway through the writing of this book, the first four chapters were sent to a handful of reviewers. Luciano was one of them, and his review was the hardest to read. I don't think I followed all of his advice, but because of his feedback, I re-examined and rewrote much of the first three chapters and changed the way I thought about the rest of the book.
Thank you to the entire pytest-dev team for creating such a cool testing tool. Thank you to Oliver Bestwalter, Florian Bruhin, Floris Bruynooghe, Dave Hunt, Holger Krekel, Bruno Oliveira, Ronny Pfannschmidt, Raphael Pierzina, and many others for answering my pytest questions over the years.
Last but not least, I need to thank the people who have thanked me. Occasionally people email to let me know how what I've written saved them time and made their jobs easier. That's awesome, and pleases me to no end. Thank you.

Brian Okken
September 2017
Copyright © 2017, The Pragmatic Bookshelf.

Preface

The use of Python is increasing not only in software development, but also in fields such as data analysis, research science, test and measurement, and other industries. The growth of Python in many critical fields also comes with the desire to properly, effectively, and efficiently put software tests in place to make sure the programs run correctly and produce the correct results. In addition, more and more software projects are embracing continuous integration and including an automated testing phase, as release cycles are shortening and thorough manual testing of increasingly complex projects is just infeasible. Teams need to be able to trust the tests being run by the continuous integration servers to tell them if they can trust their software enough to release it.
Enter pytest.

What Is pytest?

A robust Python testing tool, pytest can be used for all types and levels of software testing. pytest can be used by development teams, QA teams, independent testing groups, individuals practicing TDD, and open source projects. In fact, projects all over the Internet have switched from unittest or nose to pytest, including Mozilla and Dropbox. Why? Because pytest offers powerful features such as assert rewriting, a third-party plugin model, and a powerful yet simple fixture model that is unmatched in any other testing framework.
pytest is a software test framework, which means pytest is a command-line tool that automatically finds tests you've written, runs the tests, and reports the results. It has a library of goodies that you can use in your tests to help you test more effectively. It can be extended by writing plugins or installing third-party plugins. It can be used to test Python distributions. And it integrates easily with other tools like continuous integration and web automation.
Here are a few of the reasons pytest stands out above many other test frameworks:
Simple tests are simple to write in pytest.
Complex tests are still simple to write.
Tests are easy to read.
Tests are easy to read. (So important it's listed twice.)
You can get started in seconds.
You use assert to fail a test, not things like self.assertEqual() or self.assertLessThan(). Just assert.
You can use pytest to run tests written for unittest or nose.
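The plain-assert point is easiest to see side by side. Here is a minimal sketch (the test names and the summing example are mine, not from the book) contrasting unittest's assertion methods with pytest's bare assert:

```python
import unittest

# unittest style: each kind of comparison needs its own assert* method.
class TestSumUnittest(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)

# pytest style: a plain function and a plain assert.
# pytest's assert rewriting reports both sides of the comparison on failure.
def test_sum():
    assert sum([1, 2, 3]) == 6
```

pytest can collect and run both of these, which is how it doubles as a runner for existing unittest suites.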
pytest is being actively developed and maintained by a passionate and growing community. It's so extensible and flexible that it will easily fit into your workflow. And because it's installed separately from your Python version, you can use the same latest version of pytest on legacy Python 2 (2.6 and above) and Python 3 (3.3 and above).

Learn pytest While Testing an Example Application

How would you like to learn pytest by testing silly examples you'd never run across in real life? Me neither. We're not going to do that in this book. Instead, we're going to write tests against an example project that I hope has many of the same traits of applications you'll be testing after you read this book.

The Tasks Project

The application we'll look at is called Tasks. Tasks is a minimal task-tracking application with a command-line user interface. It has enough in common with many other types of applications that I hope you can easily see how the testing concepts you learn while developing tests against Tasks are applicable to your projects now and in the future.
While Tasks has a command-line interface (CLI), the CLI interacts with the rest of the code through an application programming interface (API). The API is the interface where we'll direct most of our testing. The API interacts with a database control layer, which interacts with a document database—either MongoDB or TinyDB. The type of database is configured at database initialization.
Before we focus on the API, let's look at tasks, the command-line tool that represents the user interface for Tasks.
Here's an example session:
$ tasks add 'do something' --owner Brian
$ tasks add 'do something else'
$ tasks list
  ID owner done summary
  -- ----- ---- -------
   1 Brian False do something
   2       False do something else
$ tasks update 2 --owner Brian
$ tasks list
  ID owner done summary
  -- ----- ---- -------
   1 Brian False do something
   2 Brian False do something else
$ tasks update 1 --done True
$ tasks list
  ID owner done summary
  -- ----- ---- -------
   1 Brian True do something
   2 Brian False do something else
$ tasks delete 1
$ tasks list
  ID owner done summary
  -- ----- ---- -------
   2 Brian False do something else
$
This isn't the most sophisticated task-management application, but it's complicated enough to use it to explore testing.

Test Strategy

While pytest is useful for unit testing, integration testing, system or end-to-end testing, and functional testing, the strategy for testing the Tasks project focuses primarily on subcutaneous functional testing. Following are some helpful definitions:
Unit test: A test that checks a small bit of code, like a function or a class, in isolation of the rest of the system. I consider the tests in Chapter 1, Getting Started with pytest, to be unit tests run against the Tasks data structure.
Integration test: A test that checks a larger bit of the code, maybe several classes, or a subsystem. Mostly it's a label used for some test larger than a unit test, but smaller than a system test.
System test (end-to-end): A test that checks all of the system under test in an environment as close to the end-user environment as possible.
Functional test: A test that checks a single bit of functionality of a system. A test that checks how well we add or delete or update a task item in Tasks is a functional test.
Subcutaneous test: A test that doesn't run against the final end-user interface, but against an interface just below the surface. Since most of the tests in this book test against the API layer, not the CLI, they qualify as subcutaneous tests.
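To make "subcutaneous" concrete, here is a small self-contained sketch. The in-memory store and the add_task/get_task functions are invented for illustration (they are not the real Tasks API); the point is that the test calls the API layer directly instead of driving the CLI:

```python
# A toy task store standing in for an application's API layer.
# These names are illustrative only, not the Tasks project API.
_tasks = {}
_next_id = 1

def add_task(summary, owner=None):
    """API-level entry point: store a task, return its id."""
    global _next_id
    task_id = _next_id
    _next_id += 1
    _tasks[task_id] = {'summary': summary, 'owner': owner, 'done': False}
    return task_id

def get_task(task_id):
    """API-level entry point: look a task up by id."""
    return _tasks[task_id]

# A subcutaneous test: it exercises the API functions directly,
# never touching the command-line interface a user would type into.
def test_add_via_api():
    task_id = add_task('do something', owner='Brian')
    assert get_task(task_id)['summary'] == 'do something'
    assert get_task(task_id)['done'] is False
```

Testing at this layer keeps the tests fast and focused on behavior, while a handful of separate end-to-end tests can cover the CLI itself.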

How This Book Is Organized

In Chapter 1, Getting Started with pytest, you'll install pytest and get it ready to use. You'll then take one piece of the Tasks project—the data structure representing a single task (a namedtuple called Task)—and use it to test examples. You'll learn how to run pytest with a handful of test files. You'll look at many of the popular and hugely useful command-line options for pytest, such as being able to re-run test failures, stop execution after the first failure, control the stack trace and test run verbosity, and much more.
In Chapter 2, Writing Test Functions, you'll install Tasks locally using pip and look at how to structure tests within a Python project. You'll do this so that you can get to writing tests against a real application. All the examples in this chapter run tests against the installed application, including writing to the database. The actual test functions are the focus of this chapter, and you'll learn how to use assert effectively in your tests. You'll also learn about markers, a feature that allows you to mark many tests to be run at one time, mark tests to be skipped, or tell pytest that we already know some tests will fail. And I'll cover how to run just some of the tests, not just with markers, but by structuring our test code into directories, modules, and classes, and how to run these subsets of tests.
Not all of your test code goes into test functions. In Chapter 3, pytest Fixtures, you'll learn how to put test data into test fixtures, as well as set up and tear down code. Setting up system state (or subsystem or unit state) is an important part of software testing. You'll explore this aspect of pytest fixtures to help get the Tasks project's database initialized and prefilled with test data for some tests. Fixtures are an incredibly powerful part of pytest, and you'll learn how to use them effectively to further reduce test code duplication and help make your test code incredibly readable and maintainable. pytest fixtures are also parametrizable, similar to test functions, and you'll use this feature to be able to run all of your tests against both TinyDB and MongoDB, the database backends supported by Tasks.
In Chapter 4, Builtin Fixtures, you will look at some builtin fixtures provided out-of-the-box by pytest. You will learn how pytest builtin fixtures can keep track of temporary directories and files for you, help you test output from your code under test, use monkey patches, check for warnings, and more.
In Chapter 5, Plugins, you'll learn how to add command-line options to pytest, alter the pytest output, and share pytest customizations, including fixtures, with others through writing, packaging, and distributing your own plugins. The plugin we develop in this chapter is used to make the test failures we see while testing Tasks just a little bit nicer. You'll also look at how to properly test your test plugins. How's that for meta? And just in case you're not inspired enough by this chapter to write some plugins of your own, I've hand-picked a bunch of great plugins to show off what's possible in Appendix 3, Plugin Sampler Pack.
Speaking of customization, in Chapter 6, Configuration, you'll learn how you can customize how pytest runs by default for your project with configuration files. With a pytest.ini file, you can do things like store command-line options so you don't have to type them all the time, tell pytest to not look into certain directories for test files, specify a minimum pytest version your tests are written for, and more. These configuration elements can be put in tox.ini or setup.cfg as well.
In the final chapter, Chapter 7, Using pytest with Other Tools, you'll look at how you can take the already powerful pytest and supercharge your testing with complementary tools. You'll run the Tasks project on multiple versions of Python with tox. You'll test the Tasks CLI while not having to run the rest of the system with mock. You'll use coverage.py to see if any of the Tasks project source code isn't being tested. You'll use Jenkins to run test suites and display results over time. And finally, you'll see how pytest can be used to run unittest tests, as well as share pytest style fixtures with unittest-based tests.

What You Need to Know

Python: You don't need to know a lot of Python. The examples don't do anything super weird or fancy.
pip: You should use pip to install pytest and pytest plugins. If you want a refresher on pip, check out Appendix 2, pip.
A command line: I wrote this book and captured the example output using bash on a Mac laptop. However, the only commands I use in bash are cd to go to a specific directory, and pytest, of course. Since cd exists in Windows cmd.exe and all unix shells that I know of, all examples should be runnable on whatever terminal-like application you choose to use.
That's it, really. You don't need to be a programming expert to start writing automated software tests with pytest.

Example Code and Online Resources

The examples in this book were written using Python 3.6 and pytest 3.2. pytest 3.2 supports Python 2.6, 2.7, and Python 3.3+.
The source code for the Tasks project, as well as for all of the tests shown in this book, is available through a link[1] on the book's web page at pragprog.com.[2] You don't need to download the source code to understand the test code; the test code is presented in usable form in the examples. But to follow along with the Tasks project, or to adapt the testing examples to test your own project (more power to you!), you must go to the book's web page to download the Tasks project. Also available on the book's web page is a link to post errata[3] and a discussion forum.[4]
I've been programming for over twenty-five years, and nothing has made me love writing test code as much as pytest. I hope you learn a lot from this book, and I hope that you'll end up loving test code as much as I do.

Footnotes
[1] https://pragprog.com/titles/bopytest/source_code
[2] https://pragprog.com/titles/bopytest
[3] https://pragprog.com/titles/bopytest/errata
[4] https://forums.pragprog.com/forums/438

Copyright © 2017, The Pragmatic Bookshelf.

Chapter 1
Getting Started with pWHVt
This is a test:
ch1/test_one.py
​ ​def​ test_passing():
​ ​assert​ (1, 2, 3) == (1, 2, 3)
This is what it looks like when it’s run: ​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pWHVt​​ ​​test_one.py​
​ ===================== test session starts ======================
​ collected 1 items

​ test_one.p.

​ =================== 1 passed in 0.01 seconds ===================
The dot after test_one.pPHDQVWKDWRQHWHVWZDVUXQDQGLWSDVVHG,Iou need more
information, RXFDQXVHYRUYHUERVH: ​ ​$ ​​pWHVt​​ ​​-v​​ ​​test_one.py​
​ ===================== test session starts ======================
​ collected 1 items

​ test_one.pWHVWBSDVVLQJ3$66(D

​ =================== 1 passed in 0.01 seconds ===================
If RXKDYHDFRORUWHUPLQDOWKH3$66('DQGERWWRPOLQHDUHJUHHQ,WVQLFH.
This is a failing test:
ch1/test_two.py
​ ​def​ test_failing():
​ ​assert​ (1, 2, 3) == (3, 2, 1)
The waStest shows RXWHVWIDLOXUHVLVRQHRIWKHPDQ reasons developers love pWHVW/HWs
watch this fail: ​ ​$ ​​pWHVt​​ ​​test_two.py​\0501\051

​ ===================== test session starts ======================
​ collected 1 items

​ test_two.pF

​ =========================== FAILURES ===========================
​ _________________________ test_failing _________________________

​ def test_failing():
​ > assert (1, 2, 3) == (3, 2, 1)
​ E assert (1, 2, 3) == (3, 2, 1)
​ E At index 0 diff: 1 != 3
​ E Use -v to get the full diff

​ test_two.p$VVHUWLRQ(UURr
​ =================== 1 failed in 0.04 seconds ===================
Cool. The failing test, test_failing, gets its own section to show us whLWIDLOHG$QGStest tells
us exactlZKDWWKHILUVWIDLOXUHLVLQGH[LVDPLVPDWFK0XFKRIWKLVLVLQUHGWRPDNHLWUHDOOy
stand out (if RXYHJRWDFRORUWHUPLQDO 7KDWVDOUHDG a lot of information, but there’s a line
that saV8VHYWRJHWWKHIXOOGLII/HWVGRWKDW: ​ ​$ ​​pWHVt​​ ​​-v​​ ​​test_two.py​
​ ===================== test session starts ======================
​ collected 1 items

​ test_two.pWHVWBIDLOLQJ)$,/(D

​ =========================== FAILURES ===========================
​ _________________________ test_failing _________________________

​ def test_failing():
​ > assert (1, 2, 3) == (3, 2, 1)
​ E assert (1, 2, 3) == (3, 2, 1)
​ E At index 0 diff: 1 != 3
​ E Full diff:
​ E - (1, 2, 3)
​ E ? ^ ^
​ E + (3, 2, 1)
​ E ? ^ ^

​ test_two.p$VVHUWLRQ(UURr
​ =================== 1 failed in 0.04 seconds ===================\0502\051

Wow. pWHVWDGGVOLWWOHFDUHWV A WRVKRZXVH[DFWO what’s different.
If RXUHDOUHDG impressed with how easLWLVWRZULWHUHDGDQGUXQWHVWVZLWKStest, and how
easLWLVWRUHDGWKHRXWSXWWRVHHZKHUHWKHWHVWVIDLOZHOOou ain’t seen nothing HW7KHUHs
lots more where that came from. Stick around and let me show RXZK I think pWHVWLVWKe
absolute best test framework available.
In the rest of this chapter, RXOOLQVWDOOStest, look at different waVWRUXQLWDQGUXQWKURXJh
some of the most often used command-line options. In future chapters, RXOOOHDUQKRZWRZULWe
test functions that maximize the power of pWHVWKRZWRSXOOVHWXSFRGHLQWRVHWXSDQGWHDUGRZn
sections called fixtures, and how to use fixtures and plugins to reallVXSHUFKDUJHour software
testing.
But first, I have an apology. I'm sorry that the test, assert (1, 2, 3) == (3, 2, 1), is so boring.
Snore. No one would write a test like that in real life. Software tests are composed of code that
tests other software that you aren't always positive will work. And (1, 2, 3) == (1, 2, 3) will
always work. That's why we won't use overly silly tests like this in the rest of the book. We'll
look at tests for a real software project. We'll use an example project called Tasks that needs
some test code. Hopefully it's simple enough to be easy to understand, but not so simple as to be
boring.
Another great use of software tests is to test your assumptions about how the software under test
works, which can include testing your understanding of third-party modules and packages, and
even builtin Python data structures. The Tasks project uses a structure called Task, which is
based on the namedtuple factory method, which is part of the standard library. The Task structure
is used as a data structure to pass information between the UI and the API. For the rest of this
chapter, I'll use Task to demonstrate running pytest and using some frequently used command-
line options.
Here’s Task:​ ​from​ collections ​import​ namedtuple
​ Task = namedtuple(​'Task'​, [​'summary'​, ​'owner'​, ​'done'​, ​'id'​])
The namedtuple() factory function has been around since Python 2.6, but I still find that many
Python developers don't know how cool it is. At the very least, using Task for test examples will
be more interesting than (1, 2, 3) == (1, 2, 3) or add(1, 2) == 3.
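If namedtuple is new to you, here is a quick standalone sketch (mine, not from the book) of what makes it handy: fields are addressable by name, yet a namedtuple still behaves like a plain tuple for comparison and indexing.

```python
from collections import namedtuple

# A Task record: behaves like a tuple, but fields are addressable by name.
Task = namedtuple('Task', ['summary', 'owner', 'done', 'id'])

t = Task('buy milk', 'brian', False, None)

# Field access by name reads much better than t[0], t[1], ...
assert t.summary == 'buy milk'
assert t.owner == 'brian'

# namedtuples still compare like plain tuples, field by field.
assert t == Task('buy milk', 'brian', False, None)
assert t == ('buy milk', 'brian', False, None)
assert t[0] == t.summary
```

This tuple-like equality is exactly what makes namedtuples pleasant in assert statements: a failing comparison reports every field.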
Before we jump into the examples, let's take a step back and talk about how to get pytest and
install it.

Getting pytest
The headquarters for pytest is https://docs.pytest.org. That's the official documentation. But it's
distributed through PyPI (the Python Package Index) at https://pypi.python.org/pypi/pytest.
Like other Python packages distributed through PyPI, use pip to install pytest into the virtual
environment you're using for testing:
​ $ pip3 install -U virtualenv
​ $ python3 -m virtualenv venv
​ $ source venv/bin/activate
​ $ pip install pytest
If you are not familiar with virtualenv or pip, I've got you covered. Check out Appendix 1,
​ Virtual Environments ​ and Appendix 2, ​ pip ​.
What About Windows, Python 2, and venv?
The example for virtualenv and pip should work on many POSIX systems, such as Linux and
macOS, and many versions of Python, including Python 2.7 and later.
The source venv/bin/activate line won't work for Windows; use venv\Scripts\activate.bat instead.
Do this:
​ C:\> pip3 install -U virtualenv
​ C:\> python -m virtualenv venv
​ C:\> venv\Scripts\activate.bat
​ (venv) C:\> pip install pytest
For Python 3.4 and above, you may get away with using venv instead of virtualenv, and you
don't have to install it first. It's included in Python 3.4 and above. However, I've heard that some
platforms still behave better with virtualenv.

Running pytest
​ ​$ ​​pytest​​ ​​--help​
​ usage: pytest [options] [file_or_dir] [file_or_dir] [...]
​ ​ ...​
Given no arguments, pytest looks at your current directory and all subdirectories for test files and
runs the test code it finds. If you give pytest a filename, a directory name, or a list of those, it
looks there instead of the current directory. Each directory listed on the command line is
recursively traversed to look for test code.
For example, let's create a subdirectory called tasks, and start with this test file:
ch1/tasks/test_three.py
​ ​"""Test the Task data type."""​

​ ​from​ collections ​import​ namedtuple

​ Task = namedtuple(​'Task'​, [​'summary'​, ​'owner'​, ​'done'​, ​'id'​])
​ Task.__new__.__defaults__ = (None, None, False, None)


​ ​def​ test_defaults():
​ ​"""Using no parameters should invoke defaults."""​
​ t1 = Task()
​ t2 = Task(None, None, False, None)
​ ​assert​ t1 == t2


​ ​def​ test_member_access():
​ ​"""Check .field functionality of namedtuple."""​
​ t = Task(​'buy milk'​, ​'brian'​)
​ ​assert​ t.summary == ​'buy milk'​
​ ​assert​ t.owner == ​'brian'​
​ ​assert​ (t.done, t.id) == (False, None)
You can use __new__.__defaults__ to create Task objects without having to specify all the
fields. The test_defaults() test is there to demonstrate and validate how the defaults work.
The test_member_access() test is to demonstrate how to access members by name and not by
index, which is one of the main reasons to use namedtuples.
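The defaults mechanism is worth a quick standalone sketch (mine, not the book's; the field values are illustrative). Assigning a tuple to `__new__.__defaults__` supplies right-aligned default values for the rightmost fields:

```python
from collections import namedtuple

Task = namedtuple('Task', ['summary', 'owner', 'done', 'id'])
# Right-aligned defaults: summary=None, owner=None, done=False, id=None
Task.__new__.__defaults__ = (None, None, False, None)

# No arguments: every field falls back to its default.
assert Task() == Task(None, None, False, None)

# Positional arguments fill fields left to right; the rest stay defaulted.
t = Task('buy milk', 'brian')
assert (t.done, t.id) == (False, None)
```

As an aside, Python 3.7 and later let you write `namedtuple('Task', [...], defaults=(...))` directly, which does the same thing.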
Let's put a couple more tests into a second file to demonstrate the _asdict() and _replace()
functionality:

ch1/tasks/test_four.py
​ ​"""Test the Task data type."""​

​ ​from​ collections ​import​ namedtuple


​ Task = namedtuple(​'Task'​, [​'summary'​, ​'owner'​, ​'done'​, ​'id'​])
​ Task.__new__.__defaults__ = (None, None, False, None)


​ ​def​ test_asdict():
​ ​"""_asdict() should return a dictionary."""​
​ t_task = Task(​'do something'​, ​'okken'​, True, 21)
​ t_dict = t_task._asdict()
​ expected = {​'summary'​: ​'do something'​,
​ ​'owner'​: ​'okken'​,
​ ​'done'​: True,
​ ​'id'​: 21}
​ ​assert​ t_dict == expected


​ ​def​ test_replace():
​ ​"""replace() should change passed in fields."""​
​ t_before = Task(​'finish book'​, ​'brian'​, False)
​ t_after = t_before._replace(id=10, done=True)
​ t_expected = Task(​'finish book'​, ​'brian'​, True, 10)
​ ​assert​ t_after == t_expected
To run pytest, you have the option to specify files and directories. If you don't specify any files
or directories, pytest will look for tests in the current working directory and subdirectories. It
looks for files starting with test_ or ending with _test. From the ch1 directory, if you run pytest
with no commands, you'll run four files' worth of tests:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​
​ ===================== test session starts ======================
​ collected 6 items

​ test_one.py .
​ test_two.py F
​ tasks/test_four.py ..
​ tasks/test_three.py ..

​ =========================== FAILURES ===========================
​ _________________________ test_failing _________________________

​ def test_failing():
​ > assert (1, 2, 3) == (3, 2, 1)
​ E assert (1, 2, 3) == (3, 2, 1)
​ E At index 0 diff: 1 != 3
​ E Use -v to get the full diff

​ test_two.py:2: AssertionError
​ ============== 1 failed, 5 passed in 0.08 seconds ==============
To get just our new task tests to run, you can give pytest all the filenames you want run, or the
directory, or call pytest from the directory where our tests are:
​ ​$ ​​pytest​​ ​​tasks/test_three.py​​ ​​tasks/test_four.py​
​ ===================== test session starts ======================
​ collected 4 items

​ tasks/test_three.py ..
​ tasks/test_four.py ..

​ =================== 4 passed in 0.02 seconds ===================
​ ​$ ​​pytest​​ ​​tasks​
​ ===================== test session starts ======================
​ collected 4 items

​ tasks/test_four.py ..
​ tasks/test_three.py ..

​ =================== 4 passed in 0.03 seconds ===================
​ ​$ ​​cd​​ ​​/path/to/code/ch1/tasks​
​ ​$ ​​pytest​
​ ===================== test session starts ======================
​ collected 4 items

​ test_four.py ..
​ test_three.py ..

​ =================== 4 passed in 0.02 seconds ===================
The part of pytest execution where pytest goes off and finds which tests to run is called test
discovery. pytest was able to find all the tests we wanted it to run because we named them
according to the pytest naming conventions. Here's a brief overview of the naming conventions
to keep your test code discoverable by pytest:
Test files should be named test_<something>.py or <something>_test.py.
Test methods and functions should be named test_<something>.
Test classes should be named Test<Something>.
Since our test files and functions start with test_, we're good. There are ways to alter these
discovery rules if you have a bunch of tests named differently. I'll cover that in Chapter 6,
Configuration
​.
Let’s take a closer look at the output of running just one file:
​ ​$ ​​cd​​ ​​/path/to/code/ch1/tasks​
​ ​$ ​​pytest​​ ​​test_three.py​
​ ================= test session starts ==================
​ platform darwin -- Python 3.6.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0
​ rootdir: /path/to/code/ch1/tasks, inifile:
​ collected 2 items

​ test_three.py ..

​ =============== 2 passed in 0.01 seconds ===============
The output tells us quite a bit.
===== test session starts ====
pytest provides a nice delimiter for the start of the test session. A session is one invocation
of pytest, including all of the tests run on possibly multiple directories. This definition of
session becomes important when I talk about session scope in relation to pytest fixtures in
Specifying Fixture Scope ​.
platform darwin -- Python 3.6.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0
platform darwin is a Mac thing. This is different on a Windows machine. The Python and
pytest versions are listed, as well as the packages pytest depends on. Both py and pluggy
are packages developed by the pytest team to help with the implementation of pytest.
rootdir: /path/to/code/ch1/tasks, inifile:
The rootdir is the topmost common directory to all of the directories being searched for
test code. The inifile (blank here) lists the configuration file being used. Configuration files
could be pytest.ini, tox.ini, or setup.cfg. You'll look at configuration files in more detail in
Chapter 6, ​
Configuration ​.
collected 2 items
These are the two test functions in the file.

test_three.py ..
The test_three.py shows the file being tested. There is one line for each test file. The two
dots denote that the tests passed: one dot for each test function or method. Dots are only
for passing tests. Failures, errors, skips, xfails, and xpasses are denoted with F, E, s, x, and
X, respectively. If you want to see more than dots for passing tests, use the -v or --verbose
option.
== 2 passed in 0.01 seconds ==
This refers to the number of passing tests and how long the entire test session took. If non-
passing tests were present, the number of each category would be listed here as well.
The outcome of a test is the primary way the person running a test or looking at the results
understands what happened in the test run. In pytest, test functions may have several different
outcomes, not just pass or fail.
Here are the possible outcomes of a test function:
PASSED (.): The test ran successfully.
FAILED (F): The test did not run successfully (or XPASS + strict).
SKIPPED (s): The test was skipped. You can tell pytest to skip a test by using either the
@pytest.mark.skip() or @pytest.mark.skipif() decorators, discussed in ​
Skipping Tests ​.
xfail (x): The test was not supposed to pass, ran, and failed. You can tell pytest that a test
is expected to fail by using the @pytest.mark.xfail() decorator, discussed in ​
Marking Tests
as Expecting to Fail ​.
XPASS (X): The test was not supposed to pass, ran, and passed.
ERROR (E): An exception happened outside of the test function, in either a fixture,
discussed in Chapter 3, ​
pytest Fixtures ​, or in a hook function, discussed in Chapter 5,
​
Plugins ​.
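As a quick sketch of how the s and x outcomes come about (my own example; the test names and reasons are made up for illustration), the builtin skip and xfail markers look like this:

```python
import sys

import pytest


@pytest.mark.skip(reason='demonstrating skip')
def test_always_skipped():
    # Never runs; reported as s.
    assert False


@pytest.mark.skipif(sys.version_info < (3, 0), reason='requires Python 3')
def test_python3_only():
    # Skipped only when the condition is true.
    assert True


@pytest.mark.xfail(reason='known bug, expected to fail')
def test_known_bug():
    # Fails, but is reported as x rather than F.
    assert (1, 2, 3) == (3, 2, 1)
```

Running this file would report one s, one x, and (on Python 3) one passing dot.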

Running Only One Test
One of the first things you'll want to do once you've started writing tests is to run just one.
Specify the file directly, and add a ::test_name, like this:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​-v​​ ​​tasks/test_four.py::test_asdict​
​ =================== test session starts ===================
​ collected 3 items

​ tasks/test_four.py::test_asdict PASSED

​ ================ 1 passed in 0.01 seconds =================
Now, let's take a look at some of the options.

Using Options
We've used the verbose option, -v or --verbose, a couple of times already, but there are many
more options worth knowing about. We're not going to use all of the options in this book, but
quite a few. You can see all of them with pytest --help.
The following are a handful of options that are quite useful when starting out with pytest. This is
by no means a complete list, but these options in particular address some common early desires
for controlling how pytest runs when you're first getting started.
​ ​$ ​​pytest​​ ​​--help​
​ ​ ...​​ ​​subset​​ ​​of​​ ​​the ​​ ​​list​​ ​​...​
​ -k EXPRESSION         only run tests/classes which match the given
​ substring expression.
​ Example: -k 'test_method or test_other' matches
​ all test functions and classes whose name
​ contains 'test_method' or 'test_other'.
​ -m MARKEXPR           only run tests matching given mark expression.
​                       example: -m 'mark1 and not mark2'.
​ -x, --exitfirst       exit instantly on first error or failed test.
​ --maxfail=num         exit after first num failures or errors.
​ --capture=method      per-test capturing method: one of fd|sys|no.
​ -s                    shortcut for --capture=no.
​ --lf, --last-failed   rerun only the tests that failed last time
​                       (or all if none failed)
​ --ff, --failed-first  run all tests but run the last failures first.
​ -v, --verbose         increase verbosity.
​ -q, --quiet           decrease verbosity.
​ -l, --showlocals      show locals in tracebacks (disabled by default).
​ --tb=style            traceback print mode (auto/long/short/line/native/no).
​ --durations=N         show N slowest setup/test durations (N=0 for all).
​ --collect-only        only collect tests, don't execute them.
​ --version             display pytest lib version and import information.
​ -h, --help            show help message and configuration info
--collect-only
The --collect-only option shows you which tests will be run with the given options and
configuration. It's convenient to show this option first so that the output can be used as a
reference for the rest of the examples. If you start in the ch1 directory, you should see all of the
test functions you've looked at so far in this chapter:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​

​ ​$ ​​pytest​​ ​​--collect-only​
​ =================== test session starts ===================
​ collected 6 items
​ <Module 'test_one.py'>
​   <Function 'test_passing'>
​ <Module 'test_two.py'>
​   <Function 'test_failing'>
​ <Module 'tasks/test_four.py'>
​   <Function 'test_asdict'>
​   <Function 'test_replace'>
​ <Module 'tasks/test_three.py'>
​   <Function 'test_defaults'>
​   <Function 'test_member_access'>
​ ============== no tests ran in 0.03 seconds ===============
The --collect-only option is helpful to check if other options that select tests are correct before
running the tests. We'll use it again with -k to show how that works.
-k EXPRESSION
The -k option lets you use an expression to find what test functions to run. Pretty powerful. It can
be used as a shortcut to running an individual test if its name is unique, or running a set of tests
that have a common prefix or suffix in their names. Let's say you want to run the test_asdict()
and test_defaults() tests. You can test out the filter with --collect-only:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​-k​​ ​​"asdict or defaults"​​ ​​--collect-only​
​ =================== test session starts ===================
​ collected 6 items
​ <Module 'tasks/test_four.py'>
​   <Function 'test_asdict'>
​ <Module 'tasks/test_three.py'>
​   <Function 'test_defaults'>
​ =================== 4 tests deselected ====================
​ ============== 4 deselected in 0.03 seconds ===============
Yep. That looks like what we want. Now you can run them by removing the --collect-only:
​ ​$ ​​pytest​​ ​​-k​​ ​​"asdict or defaults"​
​ =================== test session starts ===================
​ collected 6 items

​ tasks/test_four.py .
​ tasks/test_three.py .

​ =================== 4 tests deselected ====================
​ ========= 2 passed, 4 deselected in 0.03 seconds ==========
Hmm. Just dots. So they passed. But were they the right tests? One way to find out is to use -v or
--verbose:
​ ​$ ​​pytest​​ ​​-v​​ ​​-k​​ ​​"asdict or defaults"​
​ =================== test session starts ===================
​ collected 6 items

​ tasks/test_four.py::test_asdict PASSED
​ tasks/test_three.py::test_defaults PASSED

​ =================== 4 tests deselected ====================
​ ========= 2 passed, 4 deselected in 0.02 seconds ==========
Yep. They were the correct tests.
-m MARKEXPR
Markers are one of the best ways to mark a subset of your test functions so that they can be run
together. As an example, one way to run test_replace() and test_member_access(), even though
they are in separate files, is to mark them.
You can use any marker name. Let's say you want to use run_these_please. You'd mark a test
using the decorator @pytest.mark.run_these_please, like so:
​ ​import​ pytest

​ ...
​ @pytest.mark.run_these_please
​ ​def​ test_member_access():
​ ...
Then you'd do the same for test_replace(). You can then run all the tests with the same marker
with pytest -m run_these_please:
​ ​$ ​​cd​​ ​​/path/to/code/ch1/tasks​
​ ​$ ​​pytest​​ ​​-v​​ ​​-m​​ ​​run_these_please ​
​ ================== test session starts ===================
​ collected 4 items

​ test_four.py::test_replace PASSED
​ test_three.py::test_member_access PASSED

​ =================== 2 tests deselected ===================
​ ========= 2 passed, 2 deselected in 0.02 seconds =========
The marker expression doesn't have to be a single marker. You can say things like -m "mark1
and mark2" for tests with both markers, -m "mark1 and not mark2" for tests that have mark1 but
not mark2, -m "mark1 or mark2" for tests with either, and so on. I'll discuss markers more
completely in ​
Marking Test Functions ​.
-x, --exitfirst
Normal pytest behavior is to run every test it finds. If a test function encounters a failing assert or
an exception, the execution for that test stops there and the test fails. And then pytest runs the
next test. Most of the time, this is what you want. However, especially when debugging a
problem, stopping the entire test session immediately when a test fails is the right thing to do.
That's what the -x option does.
Let's try it on the six tests we have so far:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​-x​
​ =================== test session starts ===================
​ collected 6 items

​ test_one.py .
​ test_two.py F

​ ======================== FAILURES =========================
​ ______________________ test_failing _______________________

​ def test_failing():
​ > assert (1, 2, 3) == (3, 2, 1)
​ E assert (1, 2, 3) == (3, 2, 1)
​ E At index 0 diff: 1 != 3
​ E Use -v to get the full diff

​ test_two.py:2: AssertionError
​ !!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!
​ =========== 1 failed, 1 passed in 0.25 seconds ============
Near the top of the output you see that all six tests (or items) were collected, and in the bottom
line you see that one test failed and one passed, and pytest displays the "Interrupted" line to tell
us that it stopped.

Without -x, all six tests would have run. Let's run it again without the -x. Let's also use --tb=no
to turn off the stack trace, since you've already seen it and don't need to see it again:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​--tb=no​
​ =================== test session starts ===================
​ collected 6 items

​ test_one.py .
​ test_two.py F
​ tasks/test_four.py ..
​ tasks/test_three.py ..

​ =========== 1 failed, 5 passed in 0.09 seconds ============
This demonstrates that without the -x, pytest notes the failure in test_two.py and continues on
with further testing.
--maxfail=num
The -x option stops after one test failure. If you want to let some failures happen, but not a ton,
use the --maxfail option to specify how many failures are okay with you.
It's hard to really show this with only one failing test in our system so far, but let's take a look
anyway. Since there is only one failure, if we set --maxfail=2, all of the tests should run, and
--maxfail=1 should act just like -x:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​--maxfail=2​​ ​​--tb=no​
​ =================== test session starts ===================
​ collected 6 items

​ test_one.py .
​ test_two.py F
​ tasks/test_four.py ..
​ tasks/test_three.py ..

​ =========== 1 failed, 5 passed in 0.08 seconds ============
​ ​$ ​​pytest​​ ​​--maxfail=1​​ ​​--tb=no​
​ =================== test session starts ===================
​ collected 6 items

​ test_one.py .
​ test_two.py F


​ !!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!
​ =========== 1 failed, 1 passed in 0.19 seconds ============
Again, we used --tb=no to turn off the traceback.
-s and --capture=method
The -s flag allows print statements—or really any output that normally would be printed to stdout
—to actually be printed to stdout while the tests are running. It is a shortcut for --capture=no.
This makes sense once you understand that normally the output is captured on all tests. Failing
tests will have the output reported after the test runs on the assumption that the output will help
you understand what went wrong. The -s or --capture=no option turns off output capture. When
developing tests, I find it useful to add several print() statements so that I can watch the flow of
the test.
Another option that may help you to not need print statements in your code is -l/--showlocals,
which prints out the local variables in a test if the test fails.
Other options for capture method are --capture=fd and --capture=sys. The --capture=sys option
replaces sys.stdout/stderr with in-mem files. The --capture=fd option points file descriptors 1 and
2 to a temp file.
I'm including descriptions of sys and fd for completeness. But to be honest, I've never needed or
used either. I frequently use -s. And to fully describe how -s works, I needed to touch on capture
methods.
We don't have any print statements in our tests yet; a demo would be pointless. However, I
encourage you to play with this a bit so you see it in action.
--lf, --last-failed
When one or more tests fails, having a convenient way to run just the failing tests is helpful for
debugging. Just use --lf and you're ready to debug:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​--lf​
​ =================== test session starts ===================
​ run-last-failure: rerun last 1 failures
​ collected 6 items

​ test_two.py F

​ ======================== FAILURES =========================
​ ______________________ test_failing _______________________

​ def test_failing():

​ > assert (1, 2, 3) == (3, 2, 1)
​ E assert (1, 2, 3) == (3, 2, 1)
​ E At index 0 diff: 1 != 3
​ E Use -v to get the full diff

​ test_two.py:2: AssertionError
​ =================== 5 tests deselected ====================
​ ========= 1 failed, 5 deselected in 0.08 seconds ==========
This is great if you've been using a --tb option that hides some information and you want to re-
run the failures with a different traceback option.
--ff, --failed-first
The --ff/--failed-first option will do the same as --last-failed, and then run the rest of the tests that
passed last time:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​--ff​​ ​​--tb=no​
​ =================== test session starts ===================
​ run-last-failure: rerun last 1 failures first
​ collected 6 items

​ test_two.py F
​ test_one.py .
​ tasks/test_four.py ..
​ tasks/test_three.py ..

​ =========== 1 failed, 5 passed in 0.09 seconds ============
Usually, test_failing() from test_two.py is run after test_one.py. However, because
test_failing() failed last time, --ff causes it to be run first.
-v, --verbose
The -v/--verbose option reports more information than without it. The most obvious difference is
that each test gets its own line, and the name of the test and the outcome are spelled out instead
of indicated with just a dot.
We've used it quite a bit already, but let's run it again for fun in conjunction with --ff and
--tb=no:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​-v​​ ​​--ff​​ ​​--tb=no​
​ =================== test session starts ===================

​ run-last-failure: rerun last 1 failures first
​ collected 6 items

​ test_two.py::test_failing FAILED
​ test_one.py::test_passing PASSED
​ tasks/test_four.py::test_asdict PASSED
​ tasks/test_four.py::test_replace PASSED
​ tasks/test_three.py::test_defaults PASSED
​ tasks/test_three.py::test_member_access PASSED

​ =========== 1 failed, 5 passed in 0.07 seconds ============
With color terminals, you'd see red FAILED and green PASSED outcomes in the report as well.
-q, --quiet
The -q/--quiet option is the opposite of -v/--verbose; it decreases the information reported. I like
to use it in conjunction with --tb=line, which reports just the failing line of any failing tests.
Let's try -q by itself:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​-q​
​ .F....
​ ======================== FAILURES =========================
​ ______________________ test_failing _______________________

​ def test_failing():
​ > assert (1, 2, 3) == (3, 2, 1)
​ E assert (1, 2, 3) == (3, 2, 1)
​ E At index 0 diff: 1 != 3
​ E Full diff:
​ E - (1, 2, 3)
​ E ? ^ ^
​ E + (3, 2, 1)
​ E ? ^ ^

​ test_two.py:2: AssertionError
​ 1 failed, 5 passed in 0.08 seconds
The -q option makes the output pretty terse, but it's usually enough. We'll use the -q option
frequently in the rest of the book (as well as --tb=no) to limit the output to what we are
specifically trying to understand at the time.

-l, --showlocals
If you use the -l/--showlocals option, local variables and their values are displayed with
tracebacks for failing tests.
So far, we don't have any failing tests that have local variables. If I take the test_replace() test
and change
​ t_expected = Task(​'finish book'​, ​'brian'​, True, 10)
to
​ t_expected = Task(​'finish book'​, ​'brian'​, True, 11)
the 10 and 11 should cause a failure. Any change to the expected value will cause a failure. But
this is enough to demonstrate the command-line option -l/--showlocals:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​-l​​ ​​tasks​
​ =================== test session starts ===================
​ collected 4 items

​ tasks/test_four.py .F
​ tasks/test_three.py ..




​ ======================== FAILURES =========================
​ ______________________ test_replace _______________________

​ def test_replace():
​ t_before = Task('finish book', 'brian', False)
​ t_after = t_before._replace(id=10, done=True)
​ t_expected = Task('finish book', 'brian', True, 11)
​ > assert t_after == t_expected
​ E AssertionError: assert Task(summary=...e=True, id=10) == Task(
​ summary=...e=True, id=11)
​ E At index 3 diff: 10 != 11
​ E Use -v to get the full diff
​ 
​ t_after = Task(summary='finish book', owner='brian', done=True, id=10)
​ t_before = Task(summary='finish book', owner='brian', done=False, id=None)
​ t_expected = Task(summary='finish book', owner='brian', done=True, id=11)
​ 
​ tasks/test_four.py:26: AssertionError
​ =========== 1 failed, 3 passed in 0.08 seconds ============
The local variables t_after, t_before, and t_expected are shown after the code snippet, with the
value they contained at the time of the failed assert.
--tb=style
The --tb=style option modifies the way tracebacks for failures are output. When a test fails,
pytest lists the failures and what's called a traceback, which shows you the exact line where the
failure occurred. Although tracebacks are helpful most of the time, there may be times when they
get annoying. That's where the --tb=style option comes in handy. The styles I find useful are
short, line, and no. short prints just the assert line and the E evaluated line with no context; line
keeps the failure to one line; no removes the traceback entirely.
Let's leave the modification to test_replace() to make it fail and run it with different traceback
styles.
--tb=no removes the traceback entirely:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​--tb=no​​ ​​tasks​
​ =================== test session starts ===================
​ collected 4 items

​ tasks/test_four.py .F
​ tasks/test_three.py ..

​ =========== 1 failed, 3 passed in 0.04 seconds ============
--tb=line in many cases is enough to tell what's wrong. If you have a ton of failing tests, this
option can help to show a pattern in the failures:
​ ​$ ​​pytest​​ ​​--tb=line ​​ ​​tasks​
​ =================== test session starts ===================
​ collected 4 items

​ tasks/test_four.py .F
​ tasks/test_three.py ..

​ ======================== FAILURES =========================
​ /path/to/code/ch1/tasks/test_four.py:26:
​ AssertionError: assert Task(summary=...e=True, id=10) == Task(summary=...e=True, id=11)
​ =========== 1 failed, 3 passed in 0.05 seconds ============

The next step up in verbose tracebacks is --tb=short:
​ ​$ ​​pytest​​ ​​--tb=short​​ ​​tasks​
​ =================== test session starts ===================
​ collected 4 items

​ tasks/test_four.py .F
​ tasks/test_three.py ..

​ ======================== FAILURES =========================
​ ______________________ test_replace _______________________
​ tasks/test_four.py:26: in test_replace
​ assert t_after == t_expected
​ E AssertionError: assert Task(summary=...e=True, id=10) == Task(summary=...e=True, id=11)
​ E At index 3 diff: 10 != 11
​ E Use -v to get the full diff
​ =========== 1 failed, 3 passed in 0.04 seconds ============
That's definitely enough to tell you what's going on.
There are three remaining traceback choices that we haven't covered so far.
pytest --tb=long will show you the most exhaustive, informative traceback possible. pytest
--tb=auto will show you the long version for the first and last tracebacks, if you have multiple
failures. This is the default behavior. pytest --tb=native will show you the standard library
traceback without any extra information.
--durations=N
The --durations=N option is incredibly helpful when you're trying to speed up your test suite. It
doesn't change how your tests are run; it reports the slowest N number of tests/setups/teardowns
after the tests run. If you pass in --durations=0, it reports everything in order of slowest to fastest.
None of our tests are long, so I'll add a time.sleep(0.1) to one of the tests. Guess which one:
​ ​$ ​​cd​​ ​​/path/to/code/ch1​
​ ​$ ​​pytest​​ ​​--durations=3​​ ​​tasks​
​ ================= test session starts =================
​ collected 4 items

​ tasks/test_four.py ..
​ tasks/test_three.py ..

​ ============== slowest 3 test durations ===============
​ 0.10s call tasks/test_four.py::test_replace
​ 0.00s setup tasks/test_three.py::test_defaults
​ 0.00s teardown tasks/test_three.py::test_member_access
​ ============== 4 passed in 0.13 seconds ===============
The slow test with the extra sleep shows up right away with the label call, followed by setup and
teardown. Every test essentially has three phases: call, setup, and teardown. Setup and teardown
are also called fixtures and are a chance for you to add code to get data or the software system
under test into a precondition state before the test runs, as well as clean up afterwards if
necessary. I cover fixtures in depth in Chapter 3, ​
pytest Fixtures ​.
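The output above reveals that test_replace() was the one slowed down, so the modification presumably looked something like this sketch (my reconstruction; only the sleep is new relative to the earlier listing):

```python
import time
from collections import namedtuple

Task = namedtuple('Task', ['summary', 'owner', 'done', 'id'])
Task.__new__.__defaults__ = (None, None, False, None)


def test_replace():
    """_replace() should change passed in fields."""
    time.sleep(0.1)  # artificial delay so --durations has something to report
    t_before = Task('finish book', 'brian', False)
    t_after = t_before._replace(id=10, done=True)
    assert t_after == Task('finish book', 'brian', True, 10)
```

With this in place, `pytest --durations=3 tasks` reports roughly 0.10s for the call phase of test_replace, dwarfing the setup and teardown phases.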
--version
The --version option shows the version of pytest and the directory where it's installed:
​ ​$ ​​pytest​​ ​​--version​
​ This is pytest version 3.2.1, imported from
​ /path/to/venv/lib/python3.6/site-packages/pytest.py
Since we installed pytest into a virtual environment, pytest will be located in the site-packages
directory of that virtual environment.
-h, --help
The -h/--help option is quite helpful, even after you get used to pytest. Not only does it show you
how to use stock pytest, but it also expands as you install plugins to show options and
configuration variables added by plugins.
The -h option shows:
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
Command-line options and a short description, including options added via plugins
A list of options available to ini style configuration files, which I'll discuss more in
Chapter 6, ​
Configuration ​
A list of environmental variables that can affect pytest behavior (also discussed in Chapter
6, ​
Configuration ​)
A reminder that pytest --markers can be used to see available markers, discussed in
Chapter 2, ​
Writing Test Functions ​
A reminder that pytest --fixtures can be used to see available fixtures, discussed in Chapter
3, ​
pytest Fixtures ​
The last bit of information the help text displays is this note:

​ (shown according to specified file_or_dir or current dir if not specified)
This note is important because the options, markers, and fixtures can change based on which
directory or test file you're running. This is because along the path to a specified file or directory,
pytest may find conftest.py files that can include hook functions that create new options, fixture
definitions, and marker definitions.
The ability to customize the behavior of pytest in conftest.py files and test files allows
customized behavior local to a project or even a subset of the tests for a project. You'll learn
about conftest.py and ini files such as pytest.ini in Chapter 6, ​
Configuration ​.

Exercises
1. Create a new virtual environment using python -m virtualenv or python -m venv. Even if
you know you don't need virtual environments for the project you're working on, humor
me and learn enough about them to create one for trying out things in this book. I resisted
using them for a very long time, and now I always use them. Read Appendix 1, ​
Virtual
Environments ​ if you're having any difficulty.
2. Practice activating and deactivating your virtual environment a few times.
$ source venv/bin/activate
$ deactivate
On Windows:
C:\Users\okken\sandbox>venv\scripts\activate.bat
C:\Users\okken\sandbox>deactivate
3. Install pytest in your new virtual environment. See Appendix 2, ​
pip ​ if you have any
trouble. Even if you thought you already had pytest installed, you'll need to install it into
the virtual environment you just created.
4. Create a few test files. You can use the ones we used in this chapter or make up your own.
Practice running pytest against these files.
5. Change the assert statements. Don’t just use assert something == something_else; try things like:
assert 1 in [2, 3, 4]
assert a < b
assert 'fizz' not in 'fizzbuzz'

What’s Next
In this chapter, we looked at where to get pytest and the various ways to run it. However, we
didn't discuss what goes into test functions. In the next chapter, we'll look at writing test
functions, parametrizing them so they get called with different data, and grouping tests into
classes, modules, and packages.
Copyright © 2017 The Pragmatic Bookshelf.

Chapter 2
Writing Test Functions
In the last chapter, you got pytest up and running. You saw how to run it against files and
directories and how many of the options worked. In this chapter, you'll learn how to write test
functions in the context of testing a Python package. If you're using pytest to test something
other than a Python package, most of this chapter still applies.
We're going to write tests for the Tasks package. Before we do that, I'll talk about the structure
of a distributable Python package and the tests for it, and how to get the tests able to see the
package under test. Then I'll show you how to use assert in tests, how tests handle unexpected
exceptions, and testing for expected exceptions.
Eventually, we'll have a lot of tests. Therefore, you'll learn how to organize tests into classes,
modules, and directories. I'll then show you how to use markers to mark which tests you want to
run and discuss how builtin markers can help you skip tests and mark tests as expecting to fail.
Finally, I'll cover parametrizing tests, which allows tests to get called with different data.

Testing a Package
We'll use the sample project, Tasks, as discussed in ​The Tasks Project ​, to see how to write test functions for a Python package. Tasks is a Python package that includes a command-line tool of the same name, tasks.
Appendix 4, ​Packaging and Distributing Python Projects ​ includes an explanation of how to distribute your projects locally within a small team or globally through PyPI, so I won't go into detail of how to do that here; however, let's take a quick look at what's in the Tasks project and how the different files fit into the story of testing this project.
Following is the file structure for the Tasks project:
​ tasks_proj/
​ ├── CHANGELOG.rst
​ ├── LICENSE
​ ├── MANIFEST.in
​ ├── README.rst
​ ├── setup.py
​ ├── src
​ │ └── tasks
​ │ ├── __init__.py
​ │ ├── api.py
​ │ ├── cli.py
​ │ ├── config.py
​ │ ├── tasksdb_pymongo.py
​ │ └── tasksdb_tinydb.py
​ └── tests
​ ├── conftest.py
​ ├── pytest.ini
​ ├── func
​ │ ├── __init__.py
​ │ ├── test_add.py
​ │ └── ...
​ └── unit
​ ├── __init__.py
​ ├── test_task.py
​ └── ...
I included the complete listing of the project (with the exception of the full list of test files) to point out how the tests fit in with the rest of the project, and to point out a few files that are of key importance to testing, namely conftest.py, pytest.ini, the various __init__.py files, and setup.py.

All of the tests are kept in tests and separate from the package source files in src. This isn't a requirement of pytest, but it's a best practice.
All of the top-level files, CHANGELOG.rst, LICENSE, README.rst, MANIFEST.in, and setup.py, are discussed in more detail in Appendix 4, ​Packaging and Distributing Python Projects ​. Although setup.py is important for building a distribution out of a package, it's also crucial for being able to install a package locally so that the package is available for import.
Functional and unit tests are separated into their own directories. This is an arbitrary decision and not required. However, organizing test files into multiple directories allows you to easily run a subset of tests. I like to keep functional and unit tests separate because functional tests should only break if we are intentionally changing functionality of the system, whereas unit tests could break during a refactoring or an implementation change.
The project contains two types of __init__.py files: those found under the src/ directory and those found under tests/. The src/tasks/__init__.py file tells Python that the directory is a package. It also acts as the main interface to the package when someone uses import tasks. It contains code to import specific functions from api.py so that cli.py and our test files can access package functionality like tasks.add() instead of having to do tasks.api.add().
The tests/func/__init__.py and tests/unit/__init__.py files are empty. They tell pytest to go up one directory to look for the root of the test directory and to look for the pytest.ini file.
The pytest.ini file is optional. It contains project-wide pytest configuration. There should be at most only one of these in your project. It can contain directives that change the behavior of pytest, such as setting up a list of options that will always be used. You'll learn all about pytest.ini in Chapter 6, ​Configuration ​.
The conftest.py file is also optional. It is considered by pytest as a local plugin and can contain hook functions and fixtures. Hook functions are a way to insert code into part of the pytest execution process to alter how pytest works. Fixtures are setup and teardown functions that run before and after test functions, and can be used to represent resources and data used by the tests. (Fixtures are discussed in Chapter 3, ​pytest Fixtures ​ and Chapter 4, ​Builtin Fixtures ​, and hook functions are discussed in Chapter 5, ​Plugins ​.) Hook functions and fixtures that are used by tests in multiple subdirectories should be contained in tests/conftest.py. You can have multiple conftest.py files; for example, you can have one at tests and one for each subdirectory under tests.
If you haven't already done so, you can download a copy of the source code for this project on the book's website. [5]
Alternatively, you can work on your own project with a similar structure.
Installing a Package Locally
The test file, tests/test_task.py, contains the tests we worked on in ​Running pytest ​, in files test_three.py and test_four.py. I've just renamed it here to something that makes more sense for what it's testing and copied everything into one file. I also removed the definition of the Task data structure, because that really belongs in api.py.
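(For reference, Task is a namedtuple. The sketch below is a minimal stand-in consistent with the tests that follow, not necessarily the exact source of api.py.)

```python
# Minimal sketch of the Task data type these tests exercise: a namedtuple
# with defaults for every field. This mirrors what the tests expect; the
# real definition lives in tasks/api.py.
from collections import namedtuple

Task = namedtuple('Task', ['summary', 'owner', 'done', 'id'])
Task.__new__.__defaults__ = (None, None, False, None)

# behaves the way the tests below rely on:
t = Task('buy milk', 'brian')
assert t.done is False and t.id is None          # defaults fill in
assert Task() == Task(None, None, False, None)   # no-arg construction works
assert t._replace(done=True).done is True        # _replace returns a copy
```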
Here is test_task.py:
ch2/tasks_proj/tests/unit/test_task.py

​ ​"""Test the Task data type."""​
​ ​from​ tasks ​import​ Task


​ ​def​ test_asdict():
​ ​"""_asdict() should return a dictionary."""​
​ t_task = Task(​'do something'​, ​'okken'​, True, 21)
​ t_dict = t_task._asdict()
​ expected = {​'summary'​: ​'do something'​,
​ ​'owner'​: ​'okken'​,
​ ​'done'​: True,
​ ​'id'​: 21}
​ ​assert​ t_dict == expected


​ ​def​ test_replace():
​ ​"""replace() should change passed in fields."""​
​ t_before = Task(​'finish book'​, ​'brian'​, False)
​ t_after = t_before._replace(id=10, done=True)
​ t_expected = Task(​'finish book'​, ​'brian'​, True, 10)
​ ​assert​ t_after == t_expected


​ ​def​ test_defaults():
​ ​"""Using no parameters should invoke defaults."""​
​ t1 = Task()
​ t2 = Task(None, None, False, None)
​ ​assert​ t1 == t2


​ ​def​ test_member_access():
​ ​"""Check .field functionality of namedtuple."""​
​ t = Task(​'buy milk'​, ​'brian'​)
​ ​assert​ t.summary == ​'buy milk'​
​ ​assert​ t.owner == ​'brian'​
​ ​assert​ (t.done, t.id) == (False, None)
The test_task.py file has this import statement:
​ ​from​ tasks ​import​ Task
The best way to allow the tests to be able to import tasks, or from tasks import something, is to install tasks locally using pip. This is possible because there's a setup.py file present to direct pip.
Install tasks either by running pip install . or pip install -e . from the tasks_proj directory. Or you can run pip install -e tasks_proj from one directory up:
​ ​$ ​​cd​​ ​​/path/to/code​
​ ​$ ​​pip​​ ​​install​​ ​​./tasks_proj/​
​ ​$ ​​pip​​ ​​install​​ ​​--no-cache-dir​​ ​​./tasks_proj/​
​ Processing ./tasks_proj
​ Collecting click (from tasks==0.1.0)
​ Downloading click-6.7-py2.py3-none-any.whl (71kB)
​ ​ ...​
​ Collecting tinydb (from tasks==0.1.0)
​ Downloading tinydb-3.4.0.tar.gz
​ Collecting six (from tasks==0.1.0)
​ Downloading six-1.10.0-py2.py3-none-any.whl
​ Installing collected packages: click, tinydb, six, tasks
​ Running setup.py install for tinydb ... done
​ Running setup.py install for tasks ... done
​ Successfully installed click-6.7 six-1.10.0 tasks-0.1.0 tinydb-3.4.0
If you only want to run tests against tasks, this command is fine. If you want to be able to modify the source code while tasks is installed, you need to install it with the -e option (for "editable"):
​ ​$ ​​pip​​ ​​install​​ ​​-e​​ ​​./tasks_proj/​
​ Obtaining file:///path/to/code/tasks_proj
​ Requirement already satisfied: click in
​ /path/to/venv/lib/python3.6/site-packages (from tasks==0.1.0)
​ Requirement already satisfied: tinydb in
​ /path/to/venv/lib/python3.6/site-packages (from tasks==0.1.0)
​ Requirement already satisfied: six in
​ /path/to/venv/lib/python3.6/site-packages (from tasks==0.1.0)
​ Installing collected packages: tasks
​ Found existing installation: tasks 0.1.0
​ Uninstalling tasks-0.1.0:
​ Successfully uninstalled tasks-0.1.0
​ Running setup.py develop for tasks
​ Successfully installed tasks
Now let's try running tests:
​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj/tests/unit​
​ ​$ ​​pytest​​ ​​test_task.py​
​ ===================== test session starts ======================

​ collected 4 items

​ test_task.py ....

​ =================== 4 passed in 0.01 seconds ===================
The import worked! The rest of our tests can now safely use import tasks. Now, let's write some tests.

Using assert Statements
When you write test functions, the normal Python assert statement is your primary tool to communicate test failure. The simplicity of this within pytest is brilliant. It's what drives a lot of developers to use pytest over other frameworks.
If you've used any other testing framework, you've probably seen various assert helper functions. For example, the following is a list of a few of the assert forms and assert helper functions:
pytest              unittest
assert something    assertTrue(something)
assert a == b       assertEqual(a, b)
assert a <= b       assertLessEqual(a, b)
…                   …
With pytest, you can use assert with any expression. If the expression would evaluate to False if converted to a bool, the test would fail.
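Since assert accepts any expression, the truthiness rule is easy to see in plain Python (nothing pytest-specific here):

```python
# assert fails when bool(expression) is False; no helper functions needed.
assert [1, 2]                      # non-empty list is truthy, so this passes
assert 'fizz' in 'fizzbuzz'        # any boolean expression works
assert {'a': 1}.get('b') is None   # identity checks too

# an empty container is falsy, so a bare assert on it raises AssertionError
try:
    assert []
except AssertionError:
    pass  # as expected
```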
pytest includes a feature called assert rewriting that intercepts assert calls and replaces them with something that can tell you more about why your assertions failed. Let's see how helpful this rewriting is by looking at a few assertion failures:
ch2/tasks_proj/tests/unit/test_task_fail.py
​ ​"""Use the Task type to show test failures."""​
​ ​from​ tasks ​import​ Task


​ ​def​ test_task_equality():
​ ​"""Different tasks should not be equal."""​
​ t1 = Task(​'sit there'​, ​'brian'​)
​ t2 = Task(​'do something'​, ​'okken'​)
​ ​assert​ t1 == t2


​ ​def​ test_dict_equality():

​ ​"""Different tasks compared as dicts should not be equal."""​
​ t1_dict = Task(​'make sandwich'​, ​'okken'​)._asdict()
​ t2_dict = Task(​'make sandwich'​, ​'okkem'​)._asdict()
​ ​assert​ t1_dict == t2_dict
All of these tests fail, but what's interesting is the traceback information:
​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj/tests/unit​
​ ​$ ​​pytest​​ ​​test_task_fail.py​
​ ===================== test session starts =====================
​ collected 2 items

​ test_task_fail.py FF

​ ========================== FAILURES ===========================
​ _____________________ test_task_equality ______________________

​ def test_task_equality():
​ t1 = Task('sit there', 'brian')
​ t2 = Task('do something', 'okken')
​ > assert t1 == t2
​ E AssertionError: assert Task(summary='sit there', owner='brian', done=False, id=None) ==
​ Task(summary='do something', owner='okken', done=False, id=None)
​ E At index 0 diff: 'sit there' != 'do something'
​ E Use -v to get the full diff

​ test_task_fail.py:9: AssertionError
​ _____________________ test_dict_equality ______________________

​ def test_dict_equality():
​ t1_dict = Task('make sandwich', 'okken')._asdict()
​ t2_dict = Task('make sandwich', 'okkem')._asdict()
​ > assert t1_dict == t2_dict
​ E AssertionError: assert OrderedDict([...('id', None)]) ==
​ OrderedDict([(...('id', None)])
​ E Omitting 3 identical items, use -v to show
​ E Differing items:
​ E {'owner': 'okken'} != {'owner': 'okkem'}
​ E Use -v to get the full diff

​ test_task_fail.py:16: AssertionError
​ ================== 2 failed in 0.06 seconds ===================

Wow. That's a lot of information. For each failing test, the exact line of failure is shown with a > pointing to the failure. The E lines show you extra information about the assert failure to help you figure out what went wrong.
I intentionally put two mismatches in test_task_equality(), but only the first was shown in the previous code. Let's try it again with the -v flag, as suggested in the error message:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_task_fail.py::test_task_equality​
​ ===================== test session starts =====================
​ collected 3 items

​ test_task_fail.py::test_task_equality FAILED

​ ========================== FAILURES ===========================
​ _____________________ test_task_equality ______________________

​ def test_task_equality():
​ t1 = Task('sit there', 'brian')
​ t2 = Task('do something', 'okken')
​ > assert t1 == t2
​ E AssertionError: assert Task(summary='sit there', owner='brian', done=False, id=None) ==
​ Task(summary='do something', owner='okken', done=False, id=None)
​ E At index 0 diff: 'sit there' != 'do something'
​ E Full diff:
​ E - Task(summary='sit there', owner='brian', done=False, id=None)
​ E ? ^^^ ^^^ ^^^^
​ E + Task(summary='do something', owner='okken', done=False, id=None)
​ E ? +++ ^^^ ^^^ ^^^^

​ test_task_fail.py:9: AssertionError
​ ================== 1 failed in 0.07 seconds ===================
Well, I think that's pretty darned cool. pytest not only found both differences, but it also showed us exactly where the differences are.
This example only used equality assert; many more varieties of assert statements with awesome trace debug information are found on the pytest.org website. [6]

Expecting Exceptions
Exceptions may be raised in a few places in the Tasks API. Let's take a quick peek at the functions found in tasks/api.py:
​ ​def​ add(task): ​# type: (Task) -> int​
​ ​def​ get(task_id): ​# type: (int) -> Task ​
​ ​def​ list_tasks(owner=None): ​# type: (str|None) -> list of Task ​
​ ​def​ count(): ​# type: (None) -> int ​
​ ​def​ update(task_id, task): ​# type: (int, Task) -> None ​
​ ​def​ delete(task_id): ​# type: (int) -> None ​
​ ​def​ delete_all(): ​# type: () -> None ​
​ ​def​ unique_id(): ​# type: () -> int ​
​ ​def​ start_tasks_db(db_path, db_type): ​# type: (str, str) -> None​
​ ​def​ stop_tasks_db(): ​# type: () -> None ​
There's an agreement between the CLI code in cli.py and the API code in api.py as to what types will be sent to the API functions. These API calls are a place where I'd expect exceptions to be raised if the type is wrong.
To make sure these functions raise exceptions if called incorrectly, let's use the wrong type in a test function to intentionally cause TypeError exceptions, and use with pytest.raises(<expected exception>), like this:
ch2/tasks_proj/tests/func/test_api_exceptions.py
​ ​import​ pytest
​ ​import​ tasks


​ ​def​ test_add_raises():
​ ​"""add() should raise an exception with wrong type param."""​
​ ​with​ pytest.raises(TypeError):
​ tasks.add(task=​'not a Task object'​)
In test_add_raises(), the with pytest.raises(TypeError): statement says that whatever is in the next block of code should raise a TypeError exception. If no exception is raised, the test fails. If the test raises a different exception, it fails.
We just checked for the type of exception in test_add_raises(). You can also check the parameters to the exception. For start_tasks_db(db_path, db_type), not only does db_type need to be a string, it really has to be either 'tiny' or 'mongo'. You can check to make sure the exception message is correct by adding as excinfo:
ch2/tasks_proj/tests/func/test_api_exceptions.py
​ ​def​ test_start_tasks_db_raises():

​ ​"""Make sure unsupported db raises an exception."""​
​ ​with​ pytest.raises(ValueError) ​as​ excinfo:
​ tasks.start_tasks_db(​'some/great/path'​, ​'mysql'​)
​ exception_msg = excinfo.value.args[0]
​ ​assert​ exception_msg == ​"db_type must be a 'tiny' or 'mongo'"​
This allows us to look at the exception more closely. The variable name you put after as (excinfo in this case) is filled with information about the exception, and is of type ExceptionInfo.
In our case, we want to make sure the first (and only) parameter to the exception matches a string.
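Conceptually, pytest.raises enforces a simple contract: fail if no exception is raised, fail if the wrong exception is raised, and capture the exception for inspection. The plain-Python sketch below illustrates that contract only; it is not pytest's actual implementation, and the raises name here is a stand-in:

```python
# Simplified stand-in for pytest.raises(ExpectedError); illustration only.
from contextlib import contextmanager

@contextmanager
def raises(expected_exception):
    captured = {}
    try:
        yield captured                     # the with-block body runs here
    except expected_exception as exc:
        captured['value'] = exc            # analogous to excinfo.value
    except Exception as exc:
        raise AssertionError('wrong exception raised: %r' % exc)
    else:
        raise AssertionError('no exception raised')

# usage mirrors the test above:
with raises(ValueError) as excinfo:
    int('not a number')
assert 'invalid literal' in str(excinfo['value'])
```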

Marking Test Functions
pytest provides a cool mechanism to let you put markers on test functions. A test can have more than one marker, and a marker can be on multiple tests.
Markers make sense after you see them in action. Let's say we want to run a subset of our tests as a quick "smoke test" to get a sense for whether or not there is some major break in the system. Smoke tests are by convention not all-inclusive, thorough test suites, but a select subset that can be run quickly and give a developer a decent idea of the health of all parts of the system.
To add a smoke test suite to the Tasks project, we can add @pytest.mark.smoke to some of the tests. Let's add it to a couple of tests in test_api_exceptions.py (note that the markers smoke and get aren't built into pytest; I just made them up):
ch2/tasks_proj/tests/func/test_api_exceptions.py
ch2/tasks_proj/tests/func/test_api_exceptions.py
​ @pytest.mark.smoke
​ ​def​ test_list_raises():
​ ​"""list() should raise an exception with wrong type param."""​
​ ​with​ pytest.raises(TypeError):
​ tasks.list_tasks(owner=123)


​ @pytest.mark.get
​ @pytest.mark.smoke
​ ​def​ test_get_raises():
​ ​"""get() should raise an exception with wrong type param."""​
​ ​with​ pytest.raises(TypeError):
​ tasks.get(task_id=​'123'​)
Now, let's run just those tests that are marked, with -m marker_name:
​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj/tests/func​
​ ​$ ​​pytest​​ ​​-v​​ ​​-m​​ ​​'smoke'​​ ​​test_api_exceptions.py​
​ ===================== test session starts ======================
​ collected 7 items

​ test_api_exceptions.py::test_list_raises PASSED
​ test_api_exceptions.py::test_get_raises PASSED

​ ====================== 5 tests deselected ======================
​ ============ 2 passed, 5 deselected in 0.03 seconds ============
​ ​$ ​​pytest​​ ​​-v​​ ​​-m​​ ​​'get'​​ ​​test_api_exceptions.py​
​ ===================== test session starts ======================

​ collected 7 items

​ test_api_exceptions.py::test_get_raises PASSED

​ ====================== 6 tests deselected ======================
​ ============ 1 passed, 6 deselected in 0.01 seconds ============
Remember that -v is short for --verbose and lets us see the names of the tests that are run. Using -m 'smoke' runs both tests marked with @pytest.mark.smoke. Using -m 'get' runs the one test marked with @pytest.mark.get. Pretty straightforward.
It gets better. The expression after -m can use and, or, and not to combine multiple markers:
​ ​$ ​​pytest​​ ​​-v​​ ​​-m​​ ​​'smoke and get'​​ ​​test_api_exceptions.py​
​ ===================== test session starts ======================
​ collected 7 items

​ test_api_exceptions.py::test_get_raises PASSED

​ ====================== 6 tests deselected ======================
​ ============ 1 passed, 6 deselected in 0.03 seconds ============
That time we only ran the test that had both smoke and get markers. We can use not as well:
​ ​$ ​​pytest​​ ​​-v​​ ​​-m​​ ​​'smoke and not get'​​ ​​test_api_exceptions.py​
​ ===================== test session starts ======================
​ collected 7 items

​ test_api_exceptions.py::test_list_raises PASSED

​ ====================== 6 tests deselected ======================
​ ============ 1 passed, 6 deselected in 0.03 seconds ============
The addition of -m 'smoke and not get' selected the test that was marked with @pytest.mark.smoke but not @pytest.mark.get.
Filling Out the Smoke Test
The previous tests don't seem like a reasonable smoke test suite yet. We haven't actually touched the database or added any tasks. Surely a smoke test would do that.
Let’s add a couple of tests that look at adding a task, and use one of them as part of our smoke
test suite:
ch2/tasks_proj/tests/func/test_add.py
​ ​import​ pytest

​ ​import​ tasks
​ ​from​ tasks ​import​ Task


​ ​def​ test_add_returns_valid_id():
​ ​"""tasks.add() should return an integer."""​
​ ​# GIVEN an initialized tasks db​
​ ​# WHEN a new task is added​
​ ​# THEN returned task_id is of type int​
​ new_task = Task(​'do something'​)
​ task_id = tasks.add(new_task)
​ ​assert​ isinstance(task_id, int)


​ @pytest.mark.smoke
​ ​def​ test_added_task_has_id_set():
​ ​"""Make sure the task_id field is set by tasks.add()."""​
​ ​# GIVEN an initialized tasks db​
​ ​# AND a new task is added​
​ new_task = Task(​'sit in chair'​, owner=​'me'​, done=True)
​ task_id = tasks.add(new_task)

​ ​# WHEN task is retrieved​
​ task_from_db = tasks.get(task_id)

​ ​# THEN task_id matches id field​
​ ​assert​ task_from_db.id == task_id
Both of these tests have the comment GIVEN an initialized tasks db, and yet there is no database initialized in the test. We can define a fixture to get the database initialized before the test and cleaned up after the test:
ch2/tasks_proj/tests/func/test_add.py
​ @pytest.fixture(autouse=True)
​ ​def​ initialized_tasks_db(tmpdir):
​ ​"""Connect to db before testing, disconnect after."""​
​ ​# Setup : start db​
​ tasks.start_tasks_db(str(tmpdir), ​'tiny'​)

​ ​yield​ ​# this is where the testing happens​

​ ​# Teardown : stop db​
​ tasks.stop_tasks_db()

The fixture, tmpdir, used in this example is a builtin fixture. You'll learn all about builtin fixtures in Chapter 4, ​Builtin Fixtures ​, and you'll learn about writing your own fixtures and how they work in Chapter 3, ​pytest Fixtures ​, including the autouse parameter used here.
autouse as used in our test indicates that all tests in this file will use the fixture. The code before the yield runs before each test; the code after the yield runs after the test. The yield can return data to the test if desired. You'll look at all that and more in later chapters, but here we need some way to set up the database for testing, so I couldn't wait any longer to show you a fixture. (pytest also supports old-fashioned setup and teardown functions, like what is used in unittest and nose, but they are not nearly as fun. However, if you are curious, they are described in Appendix 5, ​xUnit Fixtures ​.)
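The order of operations around the yield can be pictured with a plain generator. This only illustrates the sequence in which setup, test, and teardown run; it is not how pytest actually drives fixtures:

```python
calls = []

def initialized_db():          # shaped like the fixture above, minus pytest
    calls.append('setup')      # runs before the test
    yield 'db-handle'          # value handed to the test, if it asks for one
    calls.append('teardown')   # runs after the test

gen = initialized_db()
handle = next(gen)             # advance to the yield: setup happens here
calls.append('test runs with ' + handle)
next(gen, None)                # resume past the yield: teardown happens here

assert calls == ['setup', 'test runs with db-handle', 'teardown']
```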
Let’s set aside fixture discussion for now and go to the top of the project and run our smoke test
suite:
​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj​
​ ​$ ​​pytest​​ ​​-v​​ ​​-m​​ ​​'smoke'​
​ ===================== test session starts ======================
​ collected 56 items

​ tests/func/test_add.py::test_added_task_has_id_set PASSED
​ tests/func/test_api_exceptions.py::test_list_raises PASSED
​ tests/func/test_api_exceptions.py::test_get_raises PASSED

​ ===================== 53 tests deselected ======================
​ =========== 3 passed, 53 deselected in 0.11 seconds ============
This shows that marked tests from different files can all run together.

Skipping Tests
While the markers discussed in ​Marking Test Functions ​ were names of your own choosing, pytest includes a few helpful builtin markers: skip, skipif, and xfail. I'll discuss skip and skipif in this section, and xfail in the next.
The skip and skipif markers enable you to skip tests you don't want to run. For example, let's say we weren't sure how tasks.unique_id() was supposed to work. Does each call to it return a different number? Or is it just a number that doesn't exist in the database already?
First, let's write a test (note that the initialized_tasks_db fixture is in this file, too; it's just not shown here):
ch2/tasks_proj/tests/func/test_unique_id_1.py
​ ​import​ pytest
​ ​import​ tasks


​ ​def​ test_unique_id():
​ ​"""Calling unique_id() twice should return different numbers."""​
​ id_1 = tasks.unique_id()
​ id_2 = tasks.unique_id()
​ ​assert​ id_1 != id_2
Then give it a run: ​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj/tests/func ​
​ ​$ ​​pytest​​ ​​test_unique_id_1.py​
​ ===================== test session starts ======================
​ collected 1 item

​ test_unique_id_1.py F

​ =========================== FAILURES ===========================
​ ________________________ test_unique_id ________________________

​ def test_unique_id():
​ """Calling unique_id() twice should return different numbers."""
​ id_1 = tasks.unique_id()
​ id_2 = tasks.unique_id()
​ > assert id_1 != id_2
​ E assert 1 != 1
​ 

​ test_unique_id_1.py:9: AssertionError
​ =================== 1 failed in 0.06 seconds ===================
Hmm. Maybe we got that wrong. After looking at the API a bit more, we see that the docstring says, "Return an integer that does not exist in the db."
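To see why the first test failed, here's a hypothetical allocator matching that docstring; the name and logic are illustrative only, not the actual Tasks implementation. With nothing added between two calls, the database hasn't changed, so both calls return the same number:

```python
def unique_id(existing_ids):
    """Return an integer that is not in existing_ids (illustrative only)."""
    return max(existing_ids, default=0) + 1

ids = []
assert unique_id(ids) == unique_id(ids)   # two calls, no insert: same answer
ids.append(unique_id(ids))                # "add a task" between the calls
assert unique_id(ids) not in ids          # now a fresh id, matching the docstring
```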
We could just change the test. But instead, let’s just mark the first one to get skipped for now:
ch2/tasks_proj/tests/func/test_unique_id_2.py
​ @pytest.mark.skip(reason=​'misunderstood the API'​)
​ ​def​ test_unique_id_1():
​ ​"""Calling unique_id() twice should return different numbers."""​
​ id_1 = tasks.unique_id()
​ id_2 = tasks.unique_id()
​ ​assert​ id_1 != id_2


​ ​def​ test_unique_id_2():
​ ​"""unique_id() should return an unused id."""​
​ ids = []
​ ids.append(tasks.add(Task(​'one'​)))
​ ids.append(tasks.add(Task(​'two'​)))
​ ids.append(tasks.add(Task(​'three'​)))
​ ​# grab a unique id​
​ uid = tasks.unique_id()
​ ​# make sure it isn't in the list of existing ids​
​ ​assert​ uid ​not​ ​in​ ids
Marking a test to be skipped is as simple as adding @pytest.mark.skip() just above the test function.
Let's run again:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_unique_id_2.py​
​ =========================== test session starts ===========================
​ collected 2 items

​ test_unique_id_2.py::test_unique_id_1 SKIPPED
​ test_unique_id_2.py::test_unique_id_2 PASSED

​ =================== 1 passed, 1 skipped in 0.02 seconds ===================
Now, let's say that for some reason we decide the first test should be valid also, and we intend to make that work in version 0.2.0 of the package. We can leave the test in place and use skipif instead:

ch2/tasks_proj/tests/func/test_unique_id_3.py
​ @pytest.mark.skipif(tasks.__version__ < ​'0.2.0'​,
​ reason=​'not supported until version 0.2.0'​)
​ ​def​ test_unique_id_1():
​ ​"""Calling unique_id() twice should return different numbers."""​
​ id_1 = tasks.unique_id()
​ id_2 = tasks.unique_id()
​ ​assert​ id_1 != id_2
The expression we pass into skipif() can be any valid Python expression. In this case, we're checking the package version.
We included reasons in both skip and skipif. It's not required in skip, but it is required in skipif. I like to include a reason for every skip, skipif, or xfail.
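One caveat about the version check in that skipif expression: comparing dotted version strings with < compares them as strings, character by character, which can surprise you once a component reaches two digits. Converting to a tuple of ints first is a safer pattern; version_tuple below is a made-up helper, not part of pytest or Tasks:

```python
def version_tuple(version):
    # split '0.10.0' into (0, 10, 0) so comparison is numeric, not lexicographic
    return tuple(int(part) for part in version.split('.'))

# string comparison is misleading: '1' sorts before '2' character-wise
assert '0.10.0' < '0.2.0'
# tuple comparison behaves as intended: 10 > 2 numerically
assert version_tuple('0.10.0') > version_tuple('0.2.0')
```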
Here's the output of the changed code:
​ ​$ ​​pytest​​ ​​test_unique_id_3.py​
​ =========================== test session starts ===========================
​ collected 2 items

​ test_unique_id_3.py s.

​ =================== 1 passed, 1 skipped in 0.02 seconds ===================
The s. shows that one test was skipped and one test passed.
We can see which one with -v:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_unique_id_3.py​
​ =========================== test session starts ===========================
​ collected 2 items

​ test_unique_id_3.py::test_unique_id_1 SKIPPED
​ test_unique_id_3.py::test_unique_id_2 PASSED

​ =================== 1 passed, 1 skipped in 0.03 seconds ===================
But we still don't know why. We can see those reasons with -rs:
​ ​$ ​​pytest​​ ​​-rs​​ ​​test_unique_id_3.py​
​ ======================== test session starts ========================
​ collected 2 items

​ test_unique_id_3.py s.

​ ====================== short test summary info ======================
​ SKIP [1] func/test_unique_id_3.py: not supported until version 0.2.0

​ ================ 1 passed, 1 skipped in 0.03 seconds ================
The -r chars option has this help text:
​ ​$ ​​pytest​​ ​​--help​
​ ​...​
​ -r chars

​ show extra test summary info as specified by chars
​ (f)ailed, (E)error, (s)skipped, (x)failed, (X)passed,
​ (p)passed, (P)passed with output, (a)all except pP.
​ ​...​
It's not only helpful for understanding test skips, but also you can use it for other test outcomes as well.

Marking Tests as Expecting to Fail
With the skip and skipif markers, a test isn't even attempted if skipped. With the xfail marker, we are telling pytest to run a test function, but that we expect it to fail. Let's modify our unique_id() test again to use xfail:
ch2/tasks_proj/tests/func/test_unique_id_4.py
​ @pytest.mark.xfail(tasks.__version__ < ​'0.2.0'​,
​ reason=​'not supported until version 0.2.0'​)
​ ​def​ test_unique_id_1():
​ ​"""Calling unique_id() twice should return different numbers."""​
​ id_1 = tasks.unique_id()
​ id_2 = tasks.unique_id()
​ ​assert​ id_1 != id_2


​ @pytest.mark.xfail()
​ ​def​ test_unique_id_is_a_duck():
​ ​"""Demonstrate xfail."""​
​ uid = tasks.unique_id()
​ ​assert​ uid == ​'a duck'​


​ @pytest.mark.xfail()
​ ​def​ test_unique_id_not_a_duck():
​ ​"""Demonstrate xpass."""​
​ uid = tasks.unique_id()
​ ​assert​ uid != ​'a duck'​
The first test is the same as before, but with xfail. The next two tests are listed as xfail, and differ only by == vs. !=. So one of them is bound to pass.
Running this shows: ​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj/tests/func ​
​ ​$ ​​pytest​​ ​​test_unique_id_4.py​
​ ======================== test session starts ========================
​ collected 4 items

​ test_unique_id_4.py xxX.

​ ========== 1 passed, 2 xfailed, 1 xpassed in 0.07 seconds ===========

The x is for XFAIL, which means “expected to fail.” The capital X is for XPASS or “expected to
fail but passed.”
--verbose lists longer descriptions:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_unique_id_4.py​
​ ======================== test session starts ========================
​ collected 4 items

​ test_unique_id_4.py::test_unique_id_1 xfail
​ test_unique_id_4.py::test_unique_id_is_a_duck xfail
​ test_unique_id_4.py::test_unique_id_not_a_duck XPASS
​ test_unique_id_4.py::test_unique_id_2 PASSED

​ ========== 1 passed, 2 xfailed, 1 xpassed in 0.08 seconds ===========
You can configure pytest to report the tests that pass but were marked with xfail to be reported as FAIL. This is done in a pytest.ini file:
​ [pytest]
​ xfail_strict=true
I'll discuss pytest.ini more in Chapter 6, ​Configuration ​.

Running a Subset of Tests
I've talked about how you can place markers on tests and run tests based on markers. You can run a subset of tests in several other ways. You can run all of the tests, or you can select a single directory, file, class within a file, or an individual test in a file or class. You haven't seen test classes used yet, so you'll look at one in this section. You can also use an expression to match test names. Let's take a look at these.
A Single Directory
To run all the tests from one directory, use the directory as a parameter to pytest:
​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj​
​ ​$ ​​pytest​​ ​​tests/func​​ ​​--tb=no​
​ ===================== test session starts ======================
​ collected 50 items

​ tests/func/test_add.py ..
​ tests/func/test_add_variety.py ................................
​ tests/func/test_api_exceptions.py .......
​ tests/func/test_unique_id_1.py F
​ tests/func/test_unique_id_2.py s.
​ tests/func/test_unique_id_3.py s.
​ tests/func/test_unique_id_4.py xxX.

​ 1 failed, 44 passed, 2 skipped, 2 xfailed, 1 xpassed in 0.26 seconds
An important trick to learn is that using -v gives you the syntax for how to run a specific directory, class, and test:
​ ​$ ​​pytest​​ ​​-v​​ ​​tests/func​​ ​​--tb=no​
​ ===================== test session starts ======================
​ collected 50 items

​ tests/func/test_add.py::test_add_returns_valid_id PASSED
​ tests/func/test_add.py::test_added_task_has_id_set PASSED
​ ​...​
​ tests/func/test_api_exceptions.py::test_add_raises PASSED
​ tests/func/test_api_exceptions.py::test_list_raises PASSED
​ tests/func/test_api_exceptions.py::test_get_raises PASSED
​ ​...​
​ tests/func/test_unique_id_1.py::test_unique_id FAILED

​ tests/func/test_unique_id_2.py::test_unique_id_1 SKIPPED
​ tests/func/test_unique_id_2.py::test_unique_id_2 PASSED
​ ​...​
​ tests/func/test_unique_id_4.py::test_unique_id_1 xfail
​ tests/func/test_unique_id_4.py::test_unique_id_is_a_duck xfail
​ tests/func/test_unique_id_4.py::test_unique_id_not_a_duck XPASS
​ tests/func/test_unique_id_4.py::test_unique_id_2 PASSED

​ 1 failed, 44 passed, 2 skipped, 2 xfailed, 1 xpassed in 0.30 seconds
You'll see the syntax listed here in the next few examples.
A Single Test File/Module
To run a file full of tests, list the file with the relative path as a parameter to pytest:
​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj​
​ ​$ ​​pytest​​ ​​tests/func/test_add.py​
​ =========================== test session starts ===========================
​ collected 2 items

​ tests/func/test_add.py ..

​ ======================== 2 passed in 0.05 seconds =========================
We’ve been doing this for a while.
A Single Test Function
To run a single test function, add :: and the test function name: ​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj​
​ ​$ ​​pytest​​ ​​-v​​ ​​tests/func/test_add.py::test_add_returns_valid_id​
​ =========================== test session starts ===========================
​ collected 3 items

​ tests/func/test_add.py::test_add_returns_valid_id PASSED

​ ======================== 1 passed in 0.02 seconds =========================
Use -v so you can see which function was run.
A Single Test Class

Test classes are a way to group tests that make sense to be grouped together. Here's an example:
ch2/tasks_proj/tests/func/test_api_exceptions.py
​ ​class​ TestUpdate():
​ ​"""Test expected exceptions with tasks.update()."""​

​ ​def​ test_bad_id(self):
​ ​"""A non-int id should raise an exception."""​
​ ​with​ pytest.raises(TypeError):
​ tasks.update(task_id={​'dict instead'​: 1},
​ task=tasks.Task())

​ ​def​ test_bad_task(self):
​ ​"""A non-Task task should raise an exception."""​
​ ​with​ pytest.raises(TypeError):
​ tasks.update(task_id=1, task=​'not a task'​)
Since these are two related tests that both test the update() function, it’s reasonable to group them
in a class. To run just this class, do like we did with functions and add ::, then the class name to
the file parameter:

 $ cd /path/to/code/ch2/tasks_proj
 $ pytest -v tests/func/test_api_exceptions.py::TestUpdate
​ =========================== test session starts ===========================
​ collected 7 items

 tests/func/test_api_exceptions.py::TestUpdate::test_bad_id PASSED
 tests/func/test_api_exceptions.py::TestUpdate::test_bad_task PASSED

​ ======================== 2 passed in 0.03 seconds =========================
A Single Test Method of a Test Class
If you don’t want to run all of a test class, just one method, just add another :: and the method
name:

 $ cd /path/to/code/ch2/tasks_proj
 $ pytest -v tests/func/test_api_exceptions.py::TestUpdate::test_bad_id
​ ===================== test session starts ======================
​ collected 1 item

 tests/func/test_api_exceptions.py::TestUpdate::test_bad_id PASSED

 =================== 1 passed in 0.03 seconds ===================

Grouping Syntax Shown by Verbose Listing
Remember that the syntax for how to run a subset of tests by directory, file,
function, class, and method doesn’t have to be memorized. The format is the same
as the test function listing when you run pytest -v.
A Set of Tests Based on Test Name
The -k option enables you to pass in an expression to run tests that have certain names specified
by the expression as a substring of the test name. You can use and, or, and not in your expression
to create complex expressions.
For example, we can run all of the functions that have _raises in their name:

 $ cd /path/to/code/ch2/tasks_proj
 $ pytest -v -k _raises
​ ===================== test session starts ======================
​ collected 56 items

 tests/func/test_api_exceptions.py::test_add_raises PASSED
 tests/func/test_api_exceptions.py::test_list_raises PASSED
 tests/func/test_api_exceptions.py::test_get_raises PASSED
 tests/func/test_api_exceptions.py::test_delete_raises PASSED
 tests/func/test_api_exceptions.py::test_start_tasks_db_raises PASSED

​ ===================== 51 tests deselected ======================
​ =========== 5 passed, 51 deselected in 0.07 seconds ============
We can use and and not to get rid of the test_delete_raises() from the session:

 $ pytest -v -k "_raises and not delete"
​ ===================== test session starts ======================
​ collected 56 items

 tests/func/test_api_exceptions.py::test_add_raises PASSED
 tests/func/test_api_exceptions.py::test_list_raises PASSED
 tests/func/test_api_exceptions.py::test_get_raises PASSED
 tests/func/test_api_exceptions.py::test_start_tasks_db_raises PASSED

​ ===================== 52 tests deselected ======================
 =========== 4 passed, 52 deselected in 0.06 seconds ============
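The -k expression matching can be modeled in plain Python. This is only a rough sketch of the substring semantics (k_matches is a hypothetical helper, not part of pytest's API), but it shows why test_delete_raises() drops out when "not delete" is added:

```python
import re

def k_matches(expression, test_name):
    """Rough model of pytest's -k matching: bare words become
    substring checks against the test name; and/or/not combine them."""
    def to_bool(match):
        word = match.group(0)
        if word in ("and", "or", "not"):
            return word  # keep boolean operators as-is
        return str(word in test_name)  # substring check -> "True"/"False"
    translated = re.sub(r"[A-Za-z_][A-Za-z0-9_]*", to_bool, expression)
    return eval(translated)  # evaluates e.g. "True and not False"

# test_delete_raises matches "_raises" but is excluded by "not delete"
print(k_matches("_raises and not delete", "test_add_raises"))     # True
print(k_matches("_raises and not delete", "test_delete_raises"))  # False
```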

In this section, you learned how to run specific test files, directories, classes, and functions, and
how to use expressions with -k to run specific sets of tests. In the next section, you’ll learn how
one test function can turn into many test cases by allowing the test to run multiple times with
different test data.

Parametrized Testing
Sending some values through a function and checking the output to make sure it’s correct is a
common pattern in software testing. However, calling a function once with one set of values and
one check for correctness isn’t enough to fully test most functions. Parametrized testing is a way
to send multiple sets of data through the same test and have pytest report if any of the sets failed.
To help understand the problem parametrized testing is trying to solve, let’s take a simple test for
add():
ch2/tasks_proj/tests/func/test_add_variety.py
 import pytest
 import tasks
 from tasks import Task


 def test_add_1():
     """tasks.get() using id returned from add() works."""
     task = Task('breathe', 'BRIAN', True)
     task_id = tasks.add(task)
     t_from_db = tasks.get(task_id)
     # everything but the id should be the same
     assert equivalent(t_from_db, task)


 def equivalent(t1, t2):
     """Check two tasks for equivalence."""
     # Compare everything but the id field
     return ((t1.summary == t2.summary) and
             (t1.owner == t2.owner) and
             (t1.done == t2.done))


 @pytest.fixture(autouse=True)
 def initialized_tasks_db(tmpdir):
     """Connect to db before testing, disconnect after."""
     tasks.start_tasks_db(str(tmpdir), 'tiny')
     yield
     tasks.stop_tasks_db()
When a Task object is created, its id field is set to None. After it’s added and retrieved from the
database, the id field will be set. Therefore, we can’t just use == to check to see if our task was
added and retrieved correctly. The equivalent() helper function checks all but the id field. The
autouse fixture is included to make sure the database is accessible. Let’s make sure the test
passes:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_1
​ ===================== test session starts ======================
​ collected 1 item

 test_add_variety.py::test_add_1 PASSED

​ =================== 1 passed in 0.03 seconds ===================
The test seems reasonable. However, it’s just testing one example task. What if we want to test
lots of variations of a task? No problem. We can use @pytest.mark.parametrize(argnames,
argvalues) to pass lots of data through the same test, like this:
ch2/tasks_proj/tests/func/test_add_variety.py
 @pytest.mark.parametrize('task',
                          [Task('sleep', done=True),
                           Task('wake', 'brian'),
                           Task('breathe', 'BRIAN', True),
                           Task('exercise', 'BrIaN', False)])
 def test_add_2(task):
     """Demonstrate parametrize with one parameter."""
     task_id = tasks.add(task)
     t_from_db = tasks.get(task_id)
     assert equivalent(t_from_db, task)
The first argument to parametrize() is a string with a comma-separated list of names—’task’, in
our case. The second argument is a list of values, which in our case is a list of Task objects.
pytest will run this test once for each task, and report each as a separate test:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_2
​ ===================== test session starts ======================
​ collected 4 items

 test_add_variety.py::test_add_2[task0] PASSED
 test_add_variety.py::test_add_2[task1] PASSED
 test_add_variety.py::test_add_2[task2] PASSED
 test_add_variety.py::test_add_2[task3] PASSED

​ =================== 4 passed in 0.05 seconds ===================
This use of parametrize() works for our purposes. However, let’s pass in the tasks as tuples to see

how multiple test parameters would work:
ch2/tasks_proj/tests/func/test_add_variety.py
 @pytest.mark.parametrize('summary, owner, done',
                          [('sleep', None, False),
                           ('wake', 'brian', False),
                           ('breathe', 'BRIAN', True),
                           ('eat eggs', 'BrIaN', False),
                           ])
 def test_add_3(summary, owner, done):
     """Demonstrate parametrize with multiple parameters."""
     task = Task(summary, owner, done)
     task_id = tasks.add(task)
     t_from_db = tasks.get(task_id)
     assert equivalent(t_from_db, task)
When you use types that are easy for pytest to convert into strings, the test identifier uses the
parameter values in the report to make it readable:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_3
​ ===================== test session starts ======================
​ collected 4 items

 test_add_variety.py::test_add_3[sleep-None-False] PASSED
 test_add_variety.py::test_add_3[wake-brian-False] PASSED
 test_add_variety.py::test_add_3[breathe-BRIAN-True] PASSED
 test_add_variety.py::test_add_3[eat eggs-BrIaN-False] PASSED

​ =================== 4 passed in 0.05 seconds ===================
You can use that whole test identifier—called a node in pytest terminology—to re-run the test if
you want:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_3[sleep-None-False]
​ ===================== test session starts ======================
​ collected 1 item

 test_add_variety.py::test_add_3[sleep-None-False] PASSED

​ =================== 1 passed in 0.02 seconds ===================
Be sure to use quotes if there are spaces in the identifier:

​ ​$ ​​cd​​ ​​/path/to/code/ch2/tasks_proj/tests/func​
 $ pytest -v "test_add_variety.py::test_add_3[eat eggs-BrIaN-False]"
​ ===================== test session starts ======================
​ collected 1 item

 test_add_variety.py::test_add_3[eat eggs-BrIaN-False] PASSED

​ =================== 1 passed in 0.03 seconds ===================
Now let’s go back to the list of tasks version, but move the task list to a variable outside the
function:
ch2/tasks_proj/tests/func/test_add_variety.py
 tasks_to_try = (Task('sleep', done=True),
                 Task('wake', 'brian'),
                 Task('wake', 'brian'),
                 Task('breathe', 'BRIAN', True),
                 Task('exercise', 'BrIaN', False))


 @pytest.mark.parametrize('task', tasks_to_try)
 def test_add_4(task):
     """Slightly different take."""
     task_id = tasks.add(task)
     t_from_db = tasks.get(task_id)
     assert equivalent(t_from_db, task)
It’s convenient, and the code looks nice. But the readability of the output is hard to interpret:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_4
​ ===================== test session starts ======================
​ collected 5 items

 test_add_variety.py::test_add_4[task0] PASSED
 test_add_variety.py::test_add_4[task1] PASSED
 test_add_variety.py::test_add_4[task2] PASSED
 test_add_variety.py::test_add_4[task3] PASSED
 test_add_variety.py::test_add_4[task4] PASSED

​ =================== 5 passed in 0.05 seconds ===================
The readability of the multiple parameter version is nice, but so is the list of Task objects. To
compromise, we can use the ids optional parameter to parametrize() to make our own identifiers
for each task data set. The ids parameter needs to be a list of strings the same length as the
number of data sets. However, because we assigned our data set to a variable name, tasks_to_try,
we can use it to generate ids:
ch2/tasks_proj/tests/func/test_add_variety.py
 task_ids = ['Task({},{},{})'.format(t.summary, t.owner, t.done)
             for t in tasks_to_try]


 @pytest.mark.parametrize('task', tasks_to_try, ids=task_ids)
 def test_add_5(task):
     """Demonstrate ids."""
     task_id = tasks.add(task)
     t_from_db = tasks.get(task_id)
     assert equivalent(t_from_db, task)
Let’s run that and see how it looks:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_5
​ ===================== test session starts ======================
​ collected 5 items

 test_add_variety.py::test_add_5[Task(sleep,None,True)] PASSED
 test_add_variety.py::test_add_5[Task(wake,brian,False)0] PASSED
 test_add_variety.py::test_add_5[Task(wake,brian,False)1] PASSED
 test_add_variety.py::test_add_5[Task(breathe,BRIAN,True)] PASSED
 test_add_variety.py::test_add_5[Task(exercise,BrIaN,False)] PASSED

​ =================== 5 passed in 0.04 seconds ===================
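The bracketed identifiers in that listing come straight from the task_ids list comprehension. A stand-alone sketch shows the strings it produces; the namedtuple here is a simplified stand-in for the real Task class (an assumption for illustration), and pytest itself appends the 0/1 suffixes to disambiguate the duplicate 'wake' entries:

```python
from collections import namedtuple

# Stand-in for the Tasks project's Task type (hypothetical simplification).
Task = namedtuple("Task", ["summary", "owner", "done"], defaults=(None, False))

tasks_to_try = (Task('sleep', done=True),
                Task('wake', 'brian'),
                Task('wake', 'brian'),
                Task('breathe', 'BRIAN', True),
                Task('exercise', 'BrIaN', False))

# Same comprehension as in the book's listing.
task_ids = ['Task({},{},{})'.format(t.summary, t.owner, t.done)
            for t in tasks_to_try]
print(task_ids[0])  # Task(sleep,None,True)
print(task_ids[1])  # Task(wake,brian,False) -- appears twice in the list
```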
And these test identifiers can be used to run tests:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v "test_add_variety.py::test_add_5[Task(exercise,BrIaN,False)]"
​ ===================== test session starts ======================
​ collected 1 item

 test_add_variety.py::test_add_5[Task(exercise,BrIaN,False)] PASSED

​ =================== 1 passed in 0.03 seconds ===================
We definitely need quotes for these identifiers; otherwise, the brackets and parentheses will
confuse the shell.

You can apply parametrize() to classes as well. When you do that, the same data sets will be sent
to all test methods in the class:
ch2/tasks_proj/tests/func/test_add_variety.py
 @pytest.mark.parametrize('task', tasks_to_try, ids=task_ids)
 class TestAdd():
     """Demonstrate parametrize and test classes."""

     def test_equivalent(self, task):
         """Similar test, just within a class."""
         task_id = tasks.add(task)
         t_from_db = tasks.get(task_id)
         assert equivalent(t_from_db, task)

     def test_valid_id(self, task):
         """We can use the same data for multiple tests."""
         task_id = tasks.add(task)
         t_from_db = tasks.get(task_id)
         assert t_from_db.id == task_id
Here it is in action:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::TestAdd
​ ===================== test session starts ======================
​ collected 10 items

 test_add_variety.py::TestAdd::test_equivalent[Task(sleep,None,True)] PASSED
 test_add_variety.py::TestAdd::test_equivalent[Task(wake,brian,False)0] PASSED
 test_add_variety.py::TestAdd::test_equivalent[Task(wake,brian,False)1] PASSED
 test_add_variety.py::TestAdd::test_equivalent[Task(breathe,BRIAN,True)] PASSED
 test_add_variety.py::TestAdd::test_equivalent[Task(exercise,BrIaN,False)] PASSED
 test_add_variety.py::TestAdd::test_valid_id[Task(sleep,None,True)] PASSED
 test_add_variety.py::TestAdd::test_valid_id[Task(wake,brian,False)0] PASSED
 test_add_variety.py::TestAdd::test_valid_id[Task(wake,brian,False)1] PASSED
 test_add_variety.py::TestAdd::test_valid_id[Task(breathe,BRIAN,True)] PASSED
 test_add_variety.py::TestAdd::test_valid_id[Task(exercise,BrIaN,False)] PASSED

​ ================== 10 passed in 0.08 seconds ===================
You can also identify parameters by including an id right alongside the parameter value when
passing in a list within the @pytest.mark.parametrize() decorator. You do this with the
pytest.param(<value>, id="something") syntax:
ch2/tasks_proj/tests/func/test_add_variety.py

 @pytest.mark.parametrize('task', [
     pytest.param(Task('create'), id='just summary'),
     pytest.param(Task('inspire', 'Michelle'), id='summary/owner'),
     pytest.param(Task('encourage', 'Michelle', True), id='summary/owner/done')])
 def test_add_6(task):
     """Demonstrate pytest.param and id."""
     task_id = tasks.add(task)
     t_from_db = tasks.get(task_id)
     assert equivalent(t_from_db, task)
In action:

 $ cd /path/to/code/ch2/tasks_proj/tests/func
 $ pytest -v test_add_variety.py::test_add_6
​ =================== test session starts ====================
​ collected 3 items

 test_add_variety.py::test_add_6[just summary] PASSED
 test_add_variety.py::test_add_6[summary/owner] PASSED
 test_add_variety.py::test_add_6[summary/owner/done] PASSED

​ ================= 3 passed in 0.05 seconds =================
This is useful when the id cannot be derived from the parameter value.

Exercises
1. Download the project for this chapter, tasks_proj, from the book’s webpage,[7] and make
sure you can install it locally with pip install /path/to/tasks_proj.
2. Explore the tests directory.
3. Run pytest with a single file.
4. Run pytest against a single directory, such as tasks_proj/tests/func. Use pytest to run tests
individually as well as a directory full at a time. There are some failing tests there. Do you
understand why they fail?
5. Add xfail or skip markers to the failing tests until you can run pytest from the tests
directory with no arguments and no failures.
6. We don’t have any tests for tasks.count() yet, among other functions. Pick an untested API
function and think of which test cases we need to have to make sure it works correctly.
7. What happens if you try to add a task with the id already set? There are some missing
exception tests in test_api_exceptions.py. See if you can fill in the missing exceptions. (It’s
okay to look at api.py for this exercise.)

What’s Next
You’ve run through a lot of the power of pytest in this chapter. Even with just what’s covered
here, you can start supercharging your test suites. In many of the examples, you used a fixture
called initialized_tasks_db. Fixtures can separate retrieving and/or generating test data from the
real guts of a test function. They can also separate common code so that multiple test functions
can use the same setup. In the next chapter, you’ll take a deep dive into the wonderful world of
pytest fixtures.
Footnotes
[5]
https://pragprog.com/titles/bopytest/source_code
[6]
http://doc.pytest.org/en/latest/example/reportingdemo.html
[7]
https://pragprog.com/titles/bopytest/source_code
Copyright © 2017 The Pragmatic Bookshelf.

Chapter 3
pytest Fixtures
Now that you’ve seen the basics of pytest, let’s turn our attention to fixtures, which are essential
to structuring test code for almost any non-trivial software system. Fixtures are functions that are
run by pytest before (and sometimes after) the actual test functions. The code in the fixture can
do whatever you want it to. You can use fixtures to get a data set for the tests to work on. You
can use fixtures to get a system into a known state before running a test. Fixtures are also used to
get data ready for multiple tests.
Here’s a simple fixture that returns a number:
ch3/test_fixtures.py
 import pytest


 @pytest.fixture()
 def some_data():
     """Return answer to ultimate question."""
     return 42


 def test_some_data(some_data):
     """Use fixture return value in a test."""
     assert some_data == 42
The @pytest.fixture() decorator is used to tell pytest that a function is a fixture. When you
include the fixture name in the parameter list of a test function, pytest knows to run it before
running the test. Fixtures can do work, and can also return data to the test function.
The test test_some_data() has the name of the fixture, some_data, as a parameter. pytest will see
this and look for a fixture with this name. Naming is significant in pytest. pytest will look in the
module of the test for a fixture of that name. It will also look in conftest.py files if it doesn’t find
it in this file.
Before we start our exploration of fixtures (and the conftest.py file), I need to address the fact
that the term fixture has many meanings in the programming and test community, and even in the
Python community. I use “fixture,” “fixture function,” and “fixture method” interchangeably to
refer to the @pytest.fixture() decorated functions discussed in this chapter. Fixture can also be
used to refer to the resource that is being set up by the fixture functions. Fixture functions often
set up or retrieve some data that the test can work with. Sometimes this data is considered a
fixture. For example, the Django community often uses fixture to mean some initial data that gets
loaded into a database at the start of an application.
Regardless of other meanings, in pytest and in this book, test fixtures refer to the mechanism
pytest provides to allow the separation of “getting ready for” and “cleaning up after” code from
your test functions.
pytest fixtures are one of the unique core features that make pytest stand out above other test
frameworks, and are the reason why many people switch to and stay with pytest. However,
fixtures in pytest are different than fixtures in Django, and different than the setup and teardown
procedures found in unittest and nose. There are a lot of features and nuances about fixtures.
Once you get a good mental model of how they work, they will seem easy to you. However, you
have to play with them a while to get there, so let’s get started.

Sharing Fixtures Through conftest.py
You can put fixtures into individual test files, but to share fixtures among multiple test files, you
need to use a conftest.py file somewhere centrally located for all of the tests. For the Tasks
project, all of the fixtures will be in tasks_proj/tests/conftest.py.
From there, the fixtures can be shared by any test. You can put fixtures in individual test files if
you want the fixture to only be used by tests in that file. Likewise, you can have other conftest.py
files in subdirectories of the top tests directory. If you do, fixtures defined in these lower-level
conftest.py files will be available to tests in that directory and subdirectories. So far, however,
the fixtures in the Tasks project are intended to be available to any test. Therefore, putting all of
our fixtures in the conftest.py file at the test root, tasks_proj/tests, makes the most sense.
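For orientation, the Tasks project layout from Chapter 2 puts that shared conftest.py at the tests root, above both test directories (an abbreviated sketch; some files are omitted):

```
tasks_proj/
└── tests/
    ├── conftest.py      <-- fixtures here are visible to all tests below
    ├── func/
    │   └── test_add.py
    └── unit/
        └── test_task.py
```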
Although conftest.py is a Python module, it should not be imported by test files. Don’t import
conftest from anywhere. The conftest.py file gets read by pytest, and is considered a local plugin,
which will make sense once we start talking about plugins in Chapter 5, Plugins. For now, think
of tests/conftest.py as a place where we can put fixtures used by all tests under the tests directory.
Next, let’s rework some of our tests for tasks_proj to properly use fixtures.

Using Fixtures for Setup and Teardown
Most of the tests in the Tasks project will assume that the Tasks database is already set up and
running and ready. And we should clean things up at the end if there is any cleanup needed, and
maybe also disconnect from the database. Luckily, most of this is taken care of within the tasks
code with tasks.start_tasks_db() and tasks.stop_tasks_db(); we just need to call them at the right
time, and we need a temporary directory.
Fortunately, pytest includes a cool fixture called tmpdir that we can use for testing and don’t
have to worry about cleaning up. It’s not magic, just good coding by the pytest folks. (Don’t
worry; we look at tmpdir and its session-scoped relative tmpdir_factory in more depth in Using
tmpdir and tmpdir_factory.)
Given those pieces, this fixture works nicely:
ch3/a/tasks_proj/tests/conftest.py
 import pytest
 import tasks
 from tasks import Task


 @pytest.fixture()
 def tasks_db(tmpdir):
     """Connect to db before tests, disconnect after."""
     # Setup : start db
     tasks.start_tasks_db(str(tmpdir), 'tiny')

     yield  # this is where the testing happens

     # Teardown : stop db
     tasks.stop_tasks_db()
The value of tmpdir isn’t a string—it’s an object that represents a directory. However, it
implements __str__, so we can use str() to get a string to pass to start_tasks_db(). We’re still
using 'tiny' for TinyDB, for now.
A fixture function runs before the tests that use it. However, if there is a yield in the function, it
stops there, passes control to the tests, and picks up on the next line after the tests are done.
Therefore, think of the code above the yield as “setup” and the code after yield as “teardown.”
The code after the yield, the teardown, is guaranteed to run regardless of what happens during
the tests. We’re not returning any data with the yield in this fixture. But you can.
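That setup/yield/teardown flow is ordinary generator mechanics. A sketch that drives a plain generator by hand (a simplified model of what pytest does internally, not its real implementation, with a dict standing in for a database connection) makes the control handoff visible, including how a yielded value would reach the test:

```python
def tasks_db_sketch():
    """Simplified stand-in for a yield fixture (no real database)."""
    db = {"connected": True}   # setup: pretend to connect
    yield db                   # a yielded value is what the test receives
    db["connected"] = False    # teardown: runs after the test finishes

# Roughly what pytest does with a yield fixture:
gen = tasks_db_sketch()
db = next(gen)                 # run the setup code, stop at the yield
assert db["connected"]         # the "test body" runs here
try:
    next(gen)                  # resume after the test -> teardown runs
except StopIteration:
    pass
print(db["connected"])         # False: teardown has run
```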
Let’s change one of our tasks.add() tests to use this fixture:
ch3/a/tasks_proj/tests/func/test_add.py

 import pytest
 import tasks
 from tasks import Task


 def test_add_returns_valid_id(tasks_db):
     """tasks.add() should return an integer."""
     # GIVEN an initialized tasks db
     # WHEN a new task is added
     # THEN returned task_id is of type int
     new_task = Task('do something')
     task_id = tasks.add(new_task)
     assert isinstance(task_id, int)
The main change here is that the extra fixture in the file has been removed, and we’ve added
tasks_db to the parameter list of the test. I like to structure tests in a GIVEN/WHEN/THEN
format using comments, especially when it isn’t obvious from the code what’s going on. I think
it’s helpful in this case. Hopefully, “GIVEN an initialized tasks db” helps to clarify why tasks_db
is used as a fixture for the test.
Make Sure Tasks Is Installed
We’re still writing tests to be run against the Tasks project in this chapter, which
was first installed in Chapter 2. If you skipped that chapter, be sure to install tasks
with cd code; pip install ./tasks_proj/.

Tracing Fixture Execution with --setup-show
If you run the test from the last section, you don’t get to see what fixtures are run:

 $ cd /path/to/code/
 $ pip install ./tasks_proj/   # if not installed yet
 $ cd /path/to/code/ch3/a/tasks_proj/tests/func
 $ pytest -v test_add.py -k valid_id
​ ===================== test session starts ======================
​ collected 3 items

 test_add.py::test_add_returns_valid_id PASSED

​ ====================== 2 tests deselected ======================
​ ============ 1 passed, 2 deselected in 0.02 seconds ============
When I’m developing fixtures, I like to see what’s running and when. Fortunately, pytest
provides a command-line flag, --setup-show, that does just that:

 $ pytest --setup-show test_add.py -k valid_id
​ ===================== test session starts ======================
​ collected 3 items

​ test_add.py
​ SETUP S tmpdir_factory
 SETUP F tmpdir (fixtures used: tmpdir_factory)
​ SETUP F tasks_db (fixtures used: tmpdir)
 func/test_add.py::test_add_returns_valid_id
 (fixtures used: tasks_db, tmpdir, tmpdir_factory).
​ TEARDOWN F tasks_db
​ TEARDOWN F tmpdir
​ TEARDOWN S tmpdir_factory

​ ====================== 2 tests deselected ======================
​ ============ 1 passed, 2 deselected in 0.02 seconds ============
Our test is in the middle, and pytest designates a SETUP and TEARDOWN portion to each
fixture. Going from test_add_returns_valid_id up, you see that tmpdir ran before the test. And
before that, tmpdir_factory. Apparently, tmpdir uses it as a fixture.
The F and S in front of the fixture names indicate scope. F for function scope, and S for session
scope. I’ll talk about scope in Specifying Fixture Scope.

Using Fixtures for Test Data
Fixtures are a great place to store data to use for testing. You can return anything. Here’s a
fixture returning a tuple of mixed type:
ch3/test_fixtures.py
 @pytest.fixture()
 def a_tuple():
     """Return something more interesting."""
     return (1, 'foo', None, {'bar': 23})


 def test_a_tuple(a_tuple):
     """Demo the a_tuple fixture."""
     assert a_tuple[3]['bar'] == 32
Since test_a_tuple() should fail (23 != 32), we can see what happens when a test with a fixture
fails:

 $ cd /path/to/code/ch3
 $ pytest test_fixtures.py::test_a_tuple
​ ===================== test session starts ======================
​ collected 1 item

 test_fixtures.py F


​ =========================== FAILURES ===========================
​ _________________________ test_a_tuple _________________________

​ a_tuple = (1, 'foo', None, {'bar': 23})

​ def test_a_tuple(a_tuple):
​ """Demo the a_tuple fixture."""
​ > assert a_tuple[3]['bar'] == 32
​ E assert 23 == 32

 test_fixtures.py: AssertionError
​ =================== 1 failed in 0.07 seconds ===================
Along with the stack trace section, pytest reports the value parameters of the function that raised
the exception or failed an assert. In the case of tests, the fixtures are parameters to the test, and
are therefore reported with the stack trace.
What happens if the assert (or any exception) happens in the fixture?

 $ pytest -v test_fixtures.py::test_other_data
​ ===================== test session starts ======================
​ collected 1 item

 test_fixtures.py::test_other_data ERROR

​ ============================ ERRORS ============================
​ ______________ ERROR at setup of test_other_data _______________

 @pytest.fixture()
​ def some_other_data():
​ """Raise an exception from fixture."""
​ x = 43
​ > assert x == 42
​ E assert 43 == 42

 test_fixtures.py: AssertionError
​ =================== 1 error in 0.04 seconds ====================
A couple of things happen. The stack trace shows correctly that the assert happened in the fixture
function. Also, test_other_data is reported not as FAIL, but as ERROR. This distinction is great.
If a test ever fails, you know the failure happened in the test proper, and not in any fixture it
depends on.
But what about the Tasks project? For the Tasks project, we could probably use some data
fixtures, perhaps different lists of tasks with various properties:
ch3/a/tasks_proj/tests/conftest.py
 # Reminder of Task constructor interface
 # Task(summary=None, owner=None, done=False, id=None)
 # summary is required
 # owner and done are optional
 # id is set by database


 @pytest.fixture()
 def tasks_just_a_few():
     """All summaries and owners are unique."""
     return (
         Task('Write some code', 'Brian', True),
         Task("Code review Brian's code", 'Katie', False),
         Task('Fix what Brian did', 'Michelle', False))


 @pytest.fixture()
 def tasks_mult_per_owner():
     """Several owners with several tasks each."""
     return (
         Task('Make a cookie', 'Raphael'),
         Task('Use an emoji', 'Raphael'),
         Task('Move to Berlin', 'Raphael'),

         Task('Create', 'Michelle'),
         Task('Inspire', 'Michelle'),
         Task('Encourage', 'Michelle'),

         Task('Do a handstand', 'Daniel'),
         Task('Write some books', 'Daniel'),
         Task('Eat ice cream', 'Daniel'))
You can use these directly from tests, or you can use them from other fixtures. Let’s use them to
build up some non-empty databases to use for testing.

Using Multiple Fixtures
You’ve already seen that tmpdir uses tmpdir_factory. And you used tmpdir in our tasks_db
fixture. Let’s keep the chain going and add some specialized fixtures for non-empty tasks
databases:
ch3/a/tasks_proj/tests/conftest.py
 @pytest.fixture()
 def db_with_3_tasks(tasks_db, tasks_just_a_few):
     """Connected db with 3 tasks, all unique."""
     for t in tasks_just_a_few:
         tasks.add(t)


 @pytest.fixture()
 def db_with_multi_per_owner(tasks_db, tasks_mult_per_owner):
     """Connected db with 9 tasks, 3 owners, all with 3 tasks."""
     for t in tasks_mult_per_owner:
         tasks.add(t)
These fixtures all include two fixtures each in their parameter list: tasks_db and a data set. The
data set is used to add tasks to the database. Now tests can use these when you want the test to
start from a non-empty database, like this:
ch3/a/tasks_proj/tests/func/test_add.py
 def test_add_increases_count(db_with_3_tasks):
     """Test tasks.add() effect on tasks.count()."""
     # GIVEN a db with 3 tasks
     # WHEN another task is added
     tasks.add(Task('throw a party'))

     # THEN the count increases by 1
     assert tasks.count() == 4
This also demonstrates one of the great reasons to use fixtures: to focus the test on what you’re
actually testing, not on what you had to do to get ready for the test. I like using comments for
GIVEN/WHEN/THEN and trying to push as much GIVEN into fixtures for two reasons. First, it
makes the test more readable and, therefore, more maintainable. Second, an assert or exception
in the fixture results in an ERROR, while an assert or exception in a test function results in a
FAIL. I don’t want test_add_increases_count() to FAIL if database initialization failed. That
would just be confusing. I want a FAIL for test_add_increases_count() to only be possible if
add() really failed to alter the count. Let’s trace it and see all the fixtures run:

 $ cd /path/to/code/ch3/a/tasks_proj/tests/func

 $ pytest --setup-show test_add.py::test_add_increases_count
​ ===================== test session starts ======================
​ collected 1 item

​ test_add.py
​ SETUP S tmpdir_factory
 SETUP F tmpdir (fixtures used: tmpdir_factory)
​ SETUP F tasks_db (fixtures used: tmpdir)
​ SETUP F tasks_just_a_few
​ SETUP F db_with_3_tasks (fixtures used: tasks_db, tasks_just_a_few)
 func/test_add.py::test_add_increases_count
 (fixtures used: db_with_3_tasks, tasks_db, tasks_just_a_few,
 tmpdir, tmpdir_factory).
​ TEARDOWN F db_with_3_tasks
​ TEARDOWN F tasks_just_a_few
​ TEARDOWN F tasks_db
​ TEARDOWN F tmpdir
​ TEARDOWN S tmpdir_factory

​ =================== 1 passed in 0.04 seconds ===================
There are those F’s and S’s for function and session scope again. Let’s learn about those next.

Specifying Fixture Scope
Fixtures include an optional parameter called scope, which controls how often a fixture gets set
up and torn down. The scope parameter to @pytest.fixture() can have the values of function,
class, module, or session. The default scope is function. The tasks_db fixture and all of the
fixtures so far don’t specify a scope. Therefore, they are function scope fixtures.
Here’s a rundown of each scope value:
scope='function'
Run once per test function. The setup portion is run before each test using the fixture. The
teardown portion is run after each test using the fixture. This is the default scope used
when no scope parameter is specified.
scope='class'
Run once per test class, regardless of how many test methods are in the class.
scope='module'
Run once per module, regardless of how many test functions or methods or other fixtures
in the module use it.
scope='session'
Run once per session. All test methods and functions using a fixture of session scope share
one setup and teardown call.
Here’s how the scope values look in action:
ch3/test_scope.py

"""Demo fixture scope."""

import pytest


@pytest.fixture(scope='function')
def func_scope():
    """A function scope fixture."""


@pytest.fixture(scope='module')
def mod_scope():
    """A module scope fixture."""


@pytest.fixture(scope='session')
def sess_scope():
    """A session scope fixture."""


@pytest.fixture(scope='class')
def class_scope():
    """A class scope fixture."""


def test_1(sess_scope, mod_scope, func_scope):
    """Test using session, module, and function scope fixtures."""


def test_2(sess_scope, mod_scope, func_scope):
    """Demo is more fun with multiple tests."""


@pytest.mark.usefixtures('class_scope')
class TestSomething():
    """Demo class scope fixtures."""

    def test_3(self):
        """Test using a class scope fixture."""

    def test_4(self):
        """Again, multiple tests are more fun."""
Let's use --setup-show to demonstrate that the number of times a fixture is called, and when the
setup and teardown are run, depend on the scope:

 $ cd /path/to/code/ch3
 $ pytest --setup-show test_scope.py
​ ======================== test session starts ========================
​ collected 4 items

​ test_scope.py
​ SETUP S sess_scope
​ SETUP M mod_scope
​ SETUP F func_scope
 test_scope.py::test_1
   (fixtures used: func_scope, mod_scope, sess_scope).
 TEARDOWN F func_scope

​ SETUP F func_scope
 test_scope.py::test_2
   (fixtures used: func_scope, mod_scope, sess_scope).
​ TEARDOWN F func_scope
​ SETUP C class_scope
 test_scope.py::TestSomething::test_3 (fixtures used: class_scope).
 test_scope.py::TestSomething::test_4 (fixtures used: class_scope).
​ TEARDOWN C class_scope
​ TEARDOWN M mod_scope
​ TEARDOWN S sess_scope

​ ===================== 4 passed in 0.01 seconds ======================
Now you get to see not just F and S for function and session scope, but also C and M for class
and module.

Scope is defined with the fixture. I know this is obvious from the code, but it's an important
point to make sure you fully grok. The scope is set at the definition of a fixture, and not at the
place where it's called. The test functions that use a fixture don't control how often a fixture is
set up and torn down.

Fixtures can only depend on other fixtures of their same scope or wider. So a function scope
fixture can depend on other function scope fixtures (the default, and used in the Tasks project so
far). A function scope fixture can also depend on class, module, and session scope fixtures, but
you can't go in the reverse order.
Changing Scope for Tasks Project Fixtures

With this knowledge of scope, let's now change the scope of some of the Tasks project fixtures.
So far, we haven't had a problem with test times. But it seems like a waste to set up a temporary
directory and a new connection to a database for every test. As long as we can ensure an empty
database when needed, that should be sufficient.

To have something like tasks_db be session scope, you need to use tmpdir_factory, since tmpdir
is function scope and tmpdir_factory is session scope. Luckily, this is just a one-line code change
(well, two if you count tmpdir -> tmpdir_factory in the parameter list):
ch3/b/tasks_proj/tests/conftest.py
import pytest
import tasks
from tasks import Task


@pytest.fixture(scope='session')
def tasks_db_session(tmpdir_factory):
    """Connect to db before tests, disconnect after."""
    temp_dir = tmpdir_factory.mktemp('temp')
    tasks.start_tasks_db(str(temp_dir), 'tiny')
    yield
    tasks.stop_tasks_db()


@pytest.fixture()
def tasks_db(tasks_db_session):
    """An empty tasks db."""
    tasks.delete_all()
Here we changed tasks_db to depend on tasks_db_session, and we deleted all the entries to make
sure it's empty. Because we didn't change its name, none of the fixtures or tests that already
include it have to change.

The data fixtures just return a value, so there really is no reason to have them run all the time.
Once per session is sufficient:
ch3/b/tasks_proj/tests/conftest.py
# Reminder of Task constructor interface
# Task(summary=None, owner=None, done=False, id=None)
# summary is required
# owner and done are optional
# id is set by database


@pytest.fixture(scope='session')
def tasks_just_a_few():
    """All summaries and owners are unique."""
    return (
        Task('Write some code', 'Brian', True),
        Task("Code review Brian's code", 'Katie', False),
        Task('Fix what Brian did', 'Michelle', False))


@pytest.fixture(scope='session')
def tasks_mult_per_owner():
    """Several owners with several tasks each."""
    return (
        Task('Make a cookie', 'Raphael'),
        Task('Use an emoji', 'Raphael'),
        Task('Move to Berlin', 'Raphael'),

        Task('Create', 'Michelle'),
        Task('Inspire', 'Michelle'),
        Task('Encourage', 'Michelle'),

        Task('Do a handstand', 'Daniel'),
        Task('Write some books', 'Daniel'),
        Task('Eat ice cream', 'Daniel'))
Now, let's see if all of these changes work with our tests:

 $ cd /path/to/code/ch3/b/tasks_proj
 $ pytest
​ ===================== test session starts ======================
​ collected 55 items

 tests/func/test_add.py ...
 tests/func/test_add_variety.py ............................
 tests/func/test_add_variety2.py ............
 tests/func/test_api_exceptions.py .......
 tests/func/test_unique_id.py .
 tests/unit/test_task.py ....

​ ================== 55 passed in 0.17 seconds ===================
Looks like it's all good. Let's trace the fixtures for one test file to see if the different scoping
worked as expected:

 $ pytest --setup-show tests/func/test_add.py
​ ======================== test session starts ========================
​ collected 3 items

​ tests/func/test_add.py
​ SETUP S tmpdir_factory
 SETUP    S tasks_db_session (fixtures used: tmpdir_factory)
​ SETUP F tasks_db (fixtures used: tasks_db_session)
 tests/func/test_add.py::test_add_returns_valid_id
   (fixtures used: tasks_db, tasks_db_session, tmpdir_factory).
​ TEARDOWN F tasks_db
​ SETUP F tasks_db (fixtures used: tasks_db_session)
 tests/func/test_add.py::test_added_task_has_id_set
   (fixtures used: tasks_db, tasks_db_session, tmpdir_factory).
​ TEARDOWN F tasks_db
​ SETUP F tasks_db (fixtures used: tasks_db_session)
​ SETUP S tasks_just_a_few
 SETUP    F db_with_3_tasks (fixtures used: tasks_db, tasks_just_a_few)

 tests/func/test_add.py::test_add_increases_count
   (fixtures used: db_with_3_tasks, tasks_db, tasks_db_session,
    tasks_just_a_few, tmpdir_factory).
​ TEARDOWN F db_with_3_tasks
​ TEARDOWN F tasks_db
​ TEARDOWN S tasks_just_a_few
​ TEARDOWN S tasks_db_session
​ TEARDOWN S tmpdir_factory

​ ===================== 3 passed in 0.03 seconds ======================
Yep. Looks right. tasks_db_session is called once per session, and the quicker tasks_db now just
cleans out the database before each test.

Specifying Fixtures with usefixtures

So far, if you wanted a test to use a fixture, you put it in the parameter list. You can also mark a
test or a class with @pytest.mark.usefixtures('fixture1', 'fixture2'). usefixtures takes one or more
strings, each naming a fixture to use. It doesn't make sense to do this with individual test
functions—it's just more typing. But it does work well for test classes:
ch3/test_scope.py
@pytest.mark.usefixtures('class_scope')
class TestSomething():
    """Demo class scope fixtures."""

    def test_3(self):
        """Test using a class scope fixture."""

    def test_4(self):
        """Again, multiple tests are more fun."""
Using usefixtures is almost the same as specifying the fixture name in the test method parameter
list. The one difference is that the test can use the return value of a fixture only if it's specified in
the parameter list. A test using a fixture due to usefixtures cannot use the fixture's return value.

Using autouse for Fixtures That Always Get Used

So far in this chapter, all of the fixtures used by tests were named by the tests (or used
usefixtures for that one class example). However, you can use autouse=True to get a fixture to
run all of the time. This works well for code you want to run at certain times, but tests don't
really depend on any system state or data from the fixture. Here's a rather contrived example:
ch3/test_autouse.py

"""Demonstrate autouse fixtures."""

import pytest
import time


@pytest.fixture(autouse=True, scope='session')
def footer_session_scope():
    """Report the time at the end of a session."""
    yield
    now = time.time()
    print('--')
    print('finished : {}'.format(time.strftime('%d %b %X', time.localtime(now))))
    print('-----------------')


@pytest.fixture(autouse=True)
def footer_function_scope():
    """Report test durations after each function."""
    start = time.time()
    yield
    stop = time.time()
    delta = stop - start
    print('\ntest duration : {:0.3} seconds'.format(delta))


def test_1():
    """Simulate long-ish running test."""
    time.sleep(1)


def test_2():
    """Simulate slightly longer test."""
    time.sleep(1.23)
We want to add test times after each test, and the date and current time at the end of the session.
Here’s what these look like:
 $ cd /path/to/code/ch3
 $ pytest -v -s test_autouse.py
​ ===================== test session starts ======================
​ collected 2 items

 test_autouse.py::test_1 PASSED
 test duration : 1.0 seconds

 test_autouse.py::test_2 PASSED
 test duration : 1.24 seconds
​ --
​ finished : 25 Jul 16:18:27
​ -----------------
​ =================== 2 passed in 2.25 seconds ===================
The autouse feature is good to have around. But it's more of an exception than a rule. Opt for
named fixtures unless you have a really great reason not to.

Now that you've seen autouse in action, you may be wondering why we didn't use it for
tasks_db in this chapter. In the Tasks project, I felt it was important to keep the ability to test
what happens if we try to use an API function before db initialization. It should raise an
appropriate exception. But we can't test this if we force good initialization on every test.

Renaming Fixtures

The name of a fixture, listed in the parameter list of tests and other fixtures using it, is usually the
same as the function name of the fixture. However, pytest allows you to rename fixtures with a
name parameter to @pytest.fixture():
ch3/test_rename_fixture.py

"""Demonstrate fixture renaming."""

import pytest


@pytest.fixture(name='lue')
def ultimate_answer_to_life_the_universe_and_everything():
    """Return ultimate answer."""
    return 42


def test_everything(lue):
    """Use the shorter name."""
    assert lue == 42
Here, lue is now the fixture name, instead of
ultimate_answer_to_life_the_universe_and_everything. That name even shows up if we run it
with --setup-show:

 $ pytest --setup-show test_rename_fixture.py
​ ======================== test session starts ========================
​ collected 1 items

​ test_rename_fixture.py
​ SETUP F lue
 test_rename_fixture.py::test_everything (fixtures used: lue).
​ TEARDOWN F lue

​ ===================== 1 passed in 0.01 seconds ======================
If you need to find out where lue is defined, you can add the pytest option --fixtures and give it
the filename for the test. It lists all the fixtures available for the test, including ones that have
been renamed:

 $ pytest --fixtures test_rename_fixture.py
​ ======================== test session starts =======================
 ...


​ ---------- fixtures defined from test_rename_fixture -----------
​ lue
​ Return ultimate answer.

​ ================= no tests ran in 0.01 seconds =================
Most of the output is omitted—there's a lot there. Luckily, the fixtures we defined are at the
bottom, along with where they are defined. We can use this to look up the definition of lue. Let's
use that in the Tasks project:

 $ cd /path/to/code/ch3/b/tasks_proj
 $ pytest --fixtures tests/func/test_add.py
​ ======================== test session starts ========================
​ ​...​
 tmpdir_factory
     Return a TempdirFactory instance for the test session.
 tmpdir
     Return a temporary directory path object which is
     unique to each test function invocation, created as
     a sub directory of the base temporary directory.
     The returned object is a `py.path.local`_ path object.

​ ------------------ fixtures defined from conftest -------------------
​ tasks_db_session
​ Connect to db before tests, disconnect after.
​ tasks_db
     An empty tasks db.
​ tasks_just_a_few
​ All summaries and owners are unique.
​ tasks_mult_per_owner
​ Several owners with several tasks each.
​ db_with_3_tasks
​ Connected db with 3 tasks, all unique.
​ db_with_multi_per_owner
​ Connected db with 9 tasks, 3 owners, all with 3 tasks.

​ =================== no tests ran in 0.01 seconds ====================
Cool. All of our conftest.py fixtures are there. And in the builtin list are the tmpdir and
tmpdir_factory fixtures that we used also.

Parametrizing Fixtures

In Parametrized Testing, we parametrized tests. We can also parametrize fixtures. We still use
our list of tasks, list of task identifiers, and an equivalence function, just as before:

ch3/b/tasks_proj/tests/func/test_add_variety2.py
import pytest
import tasks
from tasks import Task

tasks_to_try = (Task('sleep', done=True),
                Task('wake', 'brian'),
                Task('breathe', 'BRIAN', True),
                Task('exercise', 'BrIaN', False))

task_ids = ['Task({},{},{})'.format(t.summary, t.owner, t.done)
            for t in tasks_to_try]


def equivalent(t1, t2):
    """Check two tasks for equivalence."""
    return ((t1.summary == t2.summary) and
            (t1.owner == t2.owner) and
            (t1.done == t2.done))
But now, instead of parametrizing the test, we will parametrize a fixture called a_task:

ch3/b/tasks_proj/tests/func/test_add_variety2.py
@pytest.fixture(params=tasks_to_try)
def a_task(request):
    """Using no ids."""
    return request.param


def test_add_a(tasks_db, a_task):
    """Using a_task fixture (no ids)."""
    task_id = tasks.add(a_task)
    t_from_db = tasks.get(task_id)
    assert equivalent(t_from_db, a_task)
The request listed in the fixture parameter is another builtin fixture that represents the calling
state of the fixture. You'll explore it more in the next chapter. It has a field param that is filled in
with one element from the list assigned to params in @pytest.fixture(params=tasks_to_try).

The a_task fixture is pretty simple—it just returns the request.param as its value to the test using
it. Since our task list has four tasks, the fixture will be called four times, and then the test will get
called four times:

 $ cd /path/to/code/ch3/b/tasks_proj/tests/func
 $ pytest -v test_add_variety2.py::test_add_a
​ ===================== test session starts ======================
​ collected 4 items

 test_add_variety2.py::test_add_a[a_task0] PASSED
 test_add_variety2.py::test_add_a[a_task1] PASSED
 test_add_variety2.py::test_add_a[a_task2] PASSED
 test_add_variety2.py::test_add_a[a_task3] PASSED

​ =================== 4 passed in 0.03 seconds ===================
We didn't provide ids, so pytest just made up some names by appending a number to the name of
the fixture. However, we can use the same string list we used when we parametrized our tests:

ch3/b/tasks_proj/tests/func/test_add_variety2.py

@pytest.fixture(params=tasks_to_try, ids=task_ids)
​ ​def​ b_task(request):
​ ​"""Using a list of ids."""​
​ ​return​ request.param


​ ​def​ test_add_b(tasks_db, b_task):
​ ​"""Using b_task fixture, with ids."""​
​ task_id = tasks.add(b_task)
​ t_from_db = tasks.get(task_id)
​ ​assert​ equivalent(t_from_db, b_task)
This gives us better identifiers:

 $ pytest -v test_add_variety2.py::test_add_b
​ ===================== test session starts ======================
​ collected 4 items

 test_add_variety2.py::test_add_b[Task(sleep,None,True)] PASSED
 test_add_variety2.py::test_add_b[Task(wake,brian,False)] PASSED
 test_add_variety2.py::test_add_b[Task(breathe,BRIAN,True)] PASSED
 test_add_variety2.py::test_add_b[Task(exercise,BrIaN,False)] PASSED


​ =================== 4 passed in 0.04 seconds ===================
We can also set the ids parameter to a function we write that provides the identifiers. Here’s what
it looks like when we use a function to generate the identifiers:
ch3/b/tasks_proj/tests/func/test_add_variety2.py
​ ​def​ id_func(fixture_value):
​ ​"""A function for generating ids."""​
​ t = fixture_value
    return 'Task({},{},{})'.format(t.summary, t.owner, t.done)


@pytest.fixture(params=tasks_to_try, ids=id_func)
​ ​def​ c_task(request):
​ ​"""Using a function (id_func) to generate ids."""​
​ ​return​ request.param


​ ​def​ test_add_c(tasks_db, c_task):
​ ​"""Use fixture with generated ids."""​
​ task_id = tasks.add(c_task)
​ t_from_db = tasks.get(task_id)
​ ​assert​ equivalent(t_from_db, c_task)
The function will be called with the value of each item from the parametrization. Since the
parametrization is a list of Task objects, id_func() will be called with a Task object, which allows
us to use the namedtuple accessors of a single Task object to generate the identifier for one Task
object at a time. It's a bit cleaner than generating a full list ahead of time, and the output looks
the same:

 $ pytest -v test_add_variety2.py::test_add_c
​ ===================== test session starts ======================
​ collected 4 items

 test_add_variety2.py::test_add_c[Task(sleep,None,True)] PASSED
 test_add_variety2.py::test_add_c[Task(wake,brian,False)] PASSED
 test_add_variety2.py::test_add_c[Task(breathe,BRIAN,True)] PASSED
 test_add_variety2.py::test_add_c[Task(exercise,BrIaN,False)] PASSED

​ =================== 4 passed in 0.04 seconds ===================
With parametrized functions, you get to run that function multiple times. But with parametrized
fixtures, every test function that uses that fixture will be called multiple times. Very powerful.

Parametrizing Fixtures in the Tasks Project

Now, let's see how we can use parametrized fixtures in the Tasks project. So far, we used
TinyDB for all of the testing. But we want to keep our options open until later in the project.
Therefore, any code we write, and any tests we write, should work with both TinyDB and
MongoDB.
The decision (in the code) of which database to use is isolated to the start_tasks_db() call in the
tasks_db_session fixture:
ch3/b/tasks_proj/tests/conftest.py
import pytest
import tasks
from tasks import Task


@pytest.fixture(scope='session')
def tasks_db_session(tmpdir_factory):
    """Connect to db before tests, disconnect after."""
    temp_dir = tmpdir_factory.mktemp('temp')
    tasks.start_tasks_db(str(temp_dir), 'tiny')
    yield
    tasks.stop_tasks_db()


@pytest.fixture()
def tasks_db(tasks_db_session):
    """An empty tasks db."""
    tasks.delete_all()
The db_type parameter in the call to start_tasks_db() isn't magic. It just ends up switching which
subsystem gets to be responsible for the rest of the database interactions:
tasks_proj/src/tasks/api.py
def start_tasks_db(db_path, db_type):  # type: (str, str) -> None
    """Connect API functions to a db."""
    if not isinstance(db_path, string_types):
        raise TypeError('db_path must be a string')
    global _tasksdb
    if db_type == 'tiny':
        import tasks.tasksdb_tinydb
        _tasksdb = tasks.tasksdb_tinydb.start_tasks_db(db_path)
    elif db_type == 'mongo':
        import tasks.tasksdb_pymongo
        _tasksdb = tasks.tasksdb_pymongo.start_tasks_db(db_path)
    else:
        raise ValueError("db_type must be a 'tiny' or 'mongo'")
To test MongoDB, we need to run all the tests with db_type set to 'mongo'. A small change does
the trick:
ch3/c/tasks_proj/tests/conftest.py
import pytest
import tasks
from tasks import Task


#@pytest.fixture(scope='session', params=['tiny',])
@pytest.fixture(scope='session', params=['tiny', 'mongo'])
def tasks_db_session(tmpdir_factory, request):
    """Connect to db before tests, disconnect after."""
    temp_dir = tmpdir_factory.mktemp('temp')
    tasks.start_tasks_db(str(temp_dir), request.param)
    yield  # this is where the testing happens
    tasks.stop_tasks_db()


@pytest.fixture()
def tasks_db(tasks_db_session):
    """An empty tasks db."""
    tasks.delete_all()
Here I added params=['tiny', 'mongo'] to the fixture decorator, I added request to the parameter
list of tasks_db_session, and I set db_type to request.param instead of just picking 'tiny' or
'mongo'.

When you set the --verbose or -v flag with pytest running parametrized tests or parametrized
fixtures, pytest labels the different runs based on the value of the parametrization. And because
the values are already strings, that works great.
Installing MongoDB

To follow along with MongoDB testing, make sure MongoDB and pymongo are
installed. I've been testing with the community edition of MongoDB, found at
https://www.mongodb.com/download-center. pymongo is installed with pip: pip
install pymongo. However, using MongoDB is not necessary to follow along with
the rest of the book; it's used in this example and in a debugger example in Chapter
7.

Here's what we have so far:

 $ cd /path/to/code/ch3/c/tasks_proj
 $ pip install pymongo
 $ pytest -v --tb=no
​ ===================== test session starts ======================
​ collected 92 items

 test_add.py::test_add_returns_valid_id[tiny] PASSED
 test_add.py::test_added_task_has_id_set[tiny] PASSED
 test_add.py::test_add_increases_count[tiny] PASSED
 test_add_variety.py::test_add_1[tiny] PASSED
 test_add_variety.py::test_add_2[tiny-task0] PASSED
 test_add_variety.py::test_add_2[tiny-task1] PASSED
​ ​...​
 test_add.py::test_add_returns_valid_id[mongo] FAILED
 test_add.py::test_added_task_has_id_set[mongo] FAILED
 test_add.py::test_add_increases_count[mongo] PASSED
 test_add_variety.py::test_add_1[mongo] FAILED
 test_add_variety.py::test_add_2[mongo-task0] FAILED
​ ​...​
​ ============= 42 failed, 50 passed in 4.94 seconds =============
Hmm. Bummer. Looks like we'll need to do some debugging before we let anyone use the
Mongo version. You'll take a look at how to debug this in pdb: Debugging Test Failures. Until
then, we'll use the TinyDB version.

Exercises

1. Create a test file called test_fixtures.py.
2. Write a few data fixtures—functions with the @pytest.fixture() decorator that return
   some data. Perhaps a list, or a dictionary, or a tuple.
3. For each fixture, write at least one test function that uses it.
4. Write two tests that use the same fixture.
5. Run pytest --setup-show test_fixtures.py. Are all the fixtures run before every test?
6. Add scope='module' to the fixture from Exercise 4.
7. Re-run pytest --setup-show test_fixtures.py. What changed?
8. For the fixture from Exercise 6, change return <data> to yield <data>.
9. Add print statements before and after the yield.
10. Run pytest -s -v test_fixtures.py. Does the output make sense?

What's Next

The pytest fixture implementation is flexible enough to use fixtures like building blocks to build
up test setup and teardown, and to swap in and out different chunks of the system (like swapping
in Mongo for TinyDB). Because fixtures are so flexible, I use them heavily to push as much of
the setup of my tests into fixtures as I can.

In this chapter, you looked at pytest fixtures you write yourself, as well as a couple of builtin
fixtures, tmpdir and tmpdir_factory. You'll take a closer look at the builtin fixtures in the next
chapter.

Copyright © 2017, The Pragmatic Bookshelf.

Chapter 4
Builtin Fixtures
In the previous chapter, you looked at what fixtures are, how to write them, and how to use them
for test data as well as setup and teardown code. You also used conftest.py for sharing fixtures
between tests in multiple test files. By the end of Chapter 3, pytest Fixtures, the Tasks project
had these fixtures: tasks_db_session, tasks_just_a_few, tasks_mult_per_owner, tasks_db,
db_with_3_tasks, and db_with_multi_per_owner, defined in conftest.py to be used by any test
function in the Tasks project that needed them.

Reusing common fixtures is such a good idea that the pytest developers included some
commonly needed fixtures with pytest. You've already seen tmpdir and tmpdir_factory in use by
the Tasks project in Changing Scope for Tasks Project Fixtures. You'll take a look at them in
more detail in this chapter.

The builtin fixtures that come prepackaged with pytest can help you do some pretty useful things
in your tests easily and consistently. For example, in addition to handling temporary files, pytest
includes builtin fixtures to access command-line options, communicate between test sessions,
validate output streams, modify environment variables, and interrogate warnings. The builtin
fixtures are extensions to the core functionality of pytest. Let's now take a look at several of the
most often used builtin fixtures one by one.

Using tmpdir and tmpdir_factory
The tmpdir and tmpdir_factory builtin fixtures are used to create a temporary file system
directory before your test runs, and remove the directory when your test is finished. In the Tasks
project, we needed a directory to store the temporary database files used by MongoDB and
TinyDB. However, because we want to test with temporary databases that don't survive past a
test session, we used tmpdir and tmpdir_factory to do the directory creation and cleanup for us.

If you're testing something that reads, writes, or modifies files, you can use tmpdir to create files
or directories used by a single test, and you can use tmpdir_factory when you want to set up a
directory for many tests.

The tmpdir fixture has function scope, and the tmpdir_factory fixture has session scope. Any
individual test that needs a temporary directory or file just for the single test can use tmpdir. This
is also true for a fixture that is setting up a directory or file that should be recreated for each test
function.
Here’s a simple example using tmpdir:
ch4/test_tmpdir.py
def test_tmpdir(tmpdir):
    # tmpdir already has a path name associated with it
    # join() extends the path to include a filename
    # the file is created when it's written to
    a_file = tmpdir.join('something.txt')

    # you can create directories
    a_sub_dir = tmpdir.mkdir('anything')

    # you can create files in directories (created when written)
    another_file = a_sub_dir.join('something_else.txt')

    # this write creates 'something.txt'
    a_file.write('contents may settle during shipping')

    # this write creates 'anything/something_else.txt'
    another_file.write('something different')

    # you can read the files as well
    assert a_file.read() == 'contents may settle during shipping'
    assert another_file.read() == 'something different'
The value returned from tmpdir is an object of type py.path.local.[8] This seems like everything
we need for temporary directories and files. However, there's one gotcha. Because the tmpdir
fixture is defined as function scope, you can't use tmpdir to create folders or files that should
stay in place longer than one test function. For fixtures with scope other than function (class,
module, session), tmpdir_factory is available.

The tmpdir_factory fixture is a lot like tmpdir, but it has a different interface. As discussed in
Specifying Fixture Scope, function scope fixtures run once per test function, module scope
fixtures run once per module, class scope fixtures run once per class, and session scope fixtures
run once per session. Therefore, resources created in session scope fixtures have a lifetime of the
entire session.

To see how similar tmpdir and tmpdir_factory are, I'll modify the tmpdir example just enough to
use tmpdir_factory instead:
ch4/test_tmpdir.py
def test_tmpdir_factory(tmpdir_factory):
    # you should start with making a directory
    # a_dir acts like the object returned from the tmpdir fixture
    a_dir = tmpdir_factory.mktemp('mydir')

    # base_temp will be the parent dir of 'mydir'
    # you don't have to use getbasetemp()
    # using it here just to show that it's available
    base_temp = tmpdir_factory.getbasetemp()
    print('base:', base_temp)

    # the rest of this test looks the same as the 'test_tmpdir()'
    # example except I'm using a_dir instead of tmpdir

    a_file = a_dir.join('something.txt')
    a_sub_dir = a_dir.mkdir('anything')
    another_file = a_sub_dir.join('something_else.txt')

    a_file.write('contents may settle during shipping')
    another_file.write('something different')

    assert a_file.read() == 'contents may settle during shipping'
    assert another_file.read() == 'something different'
The first line uses mktemp('mydir') to create a directory and saves it in a_dir. For the rest of the
function, you can use a_dir just like the tmpdir returned from the tmpdir fixture.

In the second line of the tmpdir_factory example, the getbasetemp() function returns the base
directory used for this session. The print statement is in the example so you can see where the
directory is on your system. Let's see where it is:

 $ cd /path/to/code/ch4
 $ pytest -q -s test_tmpdir.py::test_tmpdir_factory
​ base: /private/var/folders/53/zv4j_zc506x2xq25l31qxvxm0000gn\
 /T/pytest-of-okken/pytest-732
​ .
​ 1 passed in 0.04 seconds
This base directory is system- and user-dependent, and pytest-NUM changes with an
incremented NUM for every session. The base directory is left alone after a session, but pytest
cleans them up, and only the most recent few temporary base directories are left on the system,
which is great if you need to inspect the files after a test run.

You can also specify your own base directory if you need to with pytest --basetemp=mydir.
Using Temporary Directories for Other Scopes

We get session scope temporary directories and files from the tmpdir_factory fixture, and
function scope directories and files from the tmpdir fixture. But what about other scopes? What
if we need a module or a class scope temporary directory? To do this, we create another fixture
of the scope we want and have it use tmpdir_factory.

For example, suppose we have a module full of tests, and many of them need to be able to read
some data from a json file. We could put a module scope fixture in either the module itself, or in
a conftest.py file that sets up the data file like this:
ch4/authors/conftest.py

"""Demonstrate tmpdir_factory."""

import json
import pytest


@pytest.fixture(scope='module')
def author_file_json(tmpdir_factory):
    """Write some authors to a data file."""
    python_author_data = {
        'Ned': {'City': 'Boston'},
        'Brian': {'City': 'Portland'},
        'Luciano': {'City': 'Sao Paulo'}
    }

    file = tmpdir_factory.mktemp('data').join('author_file.json')
    print('file:{}'.format(str(file)))

    with file.open('w') as f:
        json.dump(python_author_data, f)

    return file
The author_file_json() fixture creates a temporary directory called data and creates a file called
author_file.json within the data directory. It then writes the python_author_data dictionary as
json. Because this is a module scope fixture, the json file will only be created once per module
that has a test using it:
ch4/authors/test_authors.py

"""Some tests that use temp data files."""
import json


def test_brian_in_portland(author_file_json):
    """A test that uses a data file."""
    with author_file_json.open() as f:
        authors = json.load(f)
    assert authors['Brian']['City'] == 'Portland'


def test_all_have_cities(author_file_json):
    """Same file is used for both tests."""
    with author_file_json.open() as f:
        authors = json.load(f)
    for a in authors:
        assert len(authors[a]['City']) > 0
Both tests will use the same json file. If one test data file works for multiple tests, there's no use
recreating it for both.

Using pytestconfig

With the pytestconfig builtin fixture, you can control how pytest runs through command-line
arguments and options, configuration files, plugins, and the directory from which you launched
pytest. The pytestconfig fixture is a shortcut to request.config, and is sometimes referred to in the
pytest documentation as "the pytest config object."

To see how pytestconfig works, you'll look at how to add a custom command-line option and
read the option value from within a test. You can read the value of command-line options
directly from pytestconfig, but to add the option and have pytest parse it, you need to add a hook
function. Hook functions, which I cover in more detail in Chapter 5, Plugins, are another way to
control how pytest behaves, and are used frequently in plugins. However, adding a custom
command-line option and reading it from pytestconfig is common enough that I want to cover it
here.
We’ll use the pWHVWKRRNStest_addoption to add a couple of options to the options already
available in the pWHVWFRPPDQGOLQH:
ch4/pytestconfig/conftest.py
​ ​def​ pytest_addoption(parser):
​ parser.addoption(​"--myopt"​, action=​"store_true"​,
​ help=​"some boolean option"​)
​ parser.addoption(​"--foo"​, action=​"store"​, default=​"bar"​,
​ help=​"foo: bar or baz"​)
Adding command-line options via pytest_addoption should be done via plugins or in the
conftest.py file at the top of your project directory structure. You shouldn't do it in a test
subdirectory.
The options --myopt and --foo <value> were added to the previous code, and the help string was
modified, as shown here:
​ ​$ ​​cd​​ ​​/path/to/code/ch4/pytestconfig​
​ ​$ ​​pytest​​ ​​--help​
​ usage: pytest [options] [file_or_dir] [file_or_dir] [...]
​ ​...​
​ custom options:
​ --myopt               some boolean option
​ --foo=FOO foo: bar or baz
​ ​...​
Now we can access those options from a test:
ch4/pytestconfig/test_config.py
​ ​import​ pytest


​ ​def​ test_option(pytestconfig):
​     ​print​(​'"foo" set to:'​, pytestconfig.getoption(​'foo'​))
​     ​print​(​'"myopt" set to:'​, pytestconfig.getoption(​'myopt'​))
Let’s see how this works: ​ ​$ ​​pWHVt​​ ​​-s​​ ​​-q​​ ​​test_config.pWHVWBRSWLRn​
​ "foo" set to: bar
​ "mRSWVHWWR)DOVe
​ .
​ 1 passed in 0.01 seconds
​ ​$ ​​pytest​​ ​​-s​​ ​​-q​​ ​​--myopt​​ ​​test_config.py::test_option​
​ "foo" set to: bar
​ "myopt" set to: True
​ .
​ 1 passed in 0.01 seconds
​ ​$ ​​pytest​​ ​​-s​​ ​​-q​​ ​​--myopt​​ ​​--foo​​ ​​baz​​ ​​test_config.py::test_option​
​ "foo" set to: baz
​ "myopt" set to: True
​ .
​ 1 passed in 0.01 seconds
Because pytestconfig is a fixture, it can also be accessed from other fixtures. You can make
fixtures for the option names, if you like, like this:
ch4/pytestconfig/test_config.py
​ @pytest.fixture()
​ ​def​ foo(pytestconfig):
​     ​return​ pytestconfig.option.foo


​ @pytest.fixture()
​ ​def​ myopt(pytestconfig):
​     ​return​ pytestconfig.option.myopt


​ ​def​ test_fixtures_for_options(foo, myopt):
​     ​print​(​'"foo" set to:'​, foo)
​     ​print​(​'"myopt" set to:'​, myopt)
You can also access builtin options, not just options you add, as well as information about how
pytest was started (the directory, the arguments, and so on).

Here’s an example of a few configuration values and options:
ch4/pytestconfig/test_config.py
​ ​def​ test_pytestconfig(pytestconfig):
​     ​print​(​'args            :'​, pytestconfig.args)
​     ​print​(​'inifile         :'​, pytestconfig.inifile)
​     ​print​(​'invocation_dir  :'​, pytestconfig.invocation_dir)
​     ​print​(​'rootdir         :'​, pytestconfig.rootdir)
​     ​print​(​'-k EXPRESSION   :'​, pytestconfig.getoption(​'keyword'​))
​     ​print​(​'-v, --verbose   :'​, pytestconfig.getoption(​'verbose'​))
​     ​print​(​'-q, --quiet     :'​, pytestconfig.getoption(​'quiet'​))
​     ​print​(​'-l, --showlocals:'​, pytestconfig.getoption(​'showlocals'​))
​     ​print​(​'--tb=style      :'​, pytestconfig.getoption(​'tbstyle'​))
You’ll use pWHVWFRQILJDJDLQZKHQ,GHPRQVWUDWHLQLILOHVLQ&KDSWHU ​
Configuration ​.\05098\051

Using cache
Usually we testers like to think about each test as being as independent as possible from other
tests. We want to make sure order dependencies don't creep in. We want to be able to run or
rerun any test in any order and get the same result. We also want test sessions to be repeatable
and to not change behavior based on previous test sessions.
However, sometimes passing information from one test session to the next can be quite useful.
When we do want to pass information to future test sessions, we can do it with the cache builtin
fixture.
The cache fixture is all about storing information about one test session and retrieving it in the
next. A great example of using the powers of cache for good is the builtin functionality of --last-
failed and --failed-first. Let's take a look at how the data for these flags is stored using cache.
Here's the help text for the --last-failed and --failed-first options, as well as a couple of cache
options:
​ ​$ ​​pytest​​ ​​--help​
​ ​...​
​ --lf, --last-failed   rerun only the tests that failed at the last run (or
​                       all if none failed)
​ --ff, --failed-first  run all tests but run the last failures first. This
​                       may reorder tests and thus lead to repeated fixture
​                       setup/teardown
​ --cache-show show cache contents, don't perform collection or tests
​ --cache-clear remove all cache contents at start of test run.
​ ​...​
To see these in action, we’ll use these two tests:
ch4/cache/test_pass_fail.py
​ ​def​ test_this_passes():
​ ​assert​ 1 == 1


​ ​def​ test_this_fails():
​ ​assert​ 1 == 2
Let’s run them using --verbose to see the function names, and --tb=no to hide the stack trace: ​ ​$ ​​cd​​ ​​/path/to/code/ch4/cache ​
​ ​$ ​​pWHVt​​ ​​--verbose ​​ ​​--tb=no​​ ​​test_pass_fail.py​
​ ==================== test session starts ====================
​ collected 2 items\05099\051


​ test_pass_fail.py::test_this_passes PASSED
​ test_pass_fail.py::test_this_fails FAILED

​ ============ 1 failed, 1 passed in 0.05 seconds =============
If you run them again with the --ff or --failed-first flag, the tests that failed previously will be run
first, followed by the rest of the session:
​ ​$ ​​pytest​​ ​​--verbose​​ ​​--tb=no​​ ​​--ff​​ ​​test_pass_fail.py​
​ ==================== test session starts ====================
​ run-last-failure: rerun last 1 failures first
​ collected 2 items

​ test_pass_fail.py::test_this_fails FAILED
​ test_pass_fail.py::test_this_passes PASSED

​ ============ 1 failed, 1 passed in 0.04 seconds =============
Or you can use --lf or --last-failed to just run the tests that failed the last time:
​ ​$ ​​pytest​​ ​​--verbose​​ ​​--tb=no​​ ​​--lf​​ ​​test_pass_fail.py​
​ ==================== test session starts ====================
​ run-last-failure: rerun last 1 failures
​ collected 2 items

​ test_pass_fail.py::test_this_fails FAILED

​ ==================== 1 tests deselected =====================
​ ========== 1 failed, 1 deselected in 0.05 seconds ===========
Before we look at how the failure data is being saved and how you can use the same mechanism,
let's look at another example that makes the value of --lf and --ff even more obvious.
Here’s a parametrized test with one failure:
ch4/cache/test_few_failures.py
​ ​"""Demonstrate -lf and -ff with failing tests."""​

​ ​import​ pytest
​ ​from​ pytest ​import​ approx


​ testdata = [
​ ​# x, y, expected​
\050100\051

​ (1.01, 2.01, 3.02),
​ (1e25, 1e23, 1.1e25),
​ (1.23, 3.21, 4.44),
​ (0.1, 0.2, 0.3),
​ (1e25, 1e24, 1.1e25)
​ ]


​ @pytest.mark.parametrize(​"x,y,expected"​, testdata)
​ ​def​ test_a(x, y, expected):
​ ​"""Demo approx()."""​
​ sum_ = x + y
​ ​assert​ sum_ == approx(expected)
And the output:
​ ​$ ​​cd​​ ​​/path/to/code/ch4/cache​
​ ​$ ​​pytest​​ ​​-q​​ ​​test_few_failures.py​
​ .F...
​ ====================== FAILURES ======================
​ ____________ test_a[1e+25-1e+23-1.1e+25] _____________

​ x = 1e+25, y = 1e+23, expected = 1.1e+25

​ @pytest.mark.parametrize("x,y,expected", testdata)
​ def test_a(x, y, expected):
​ sum_ = x + y
​ > assert sum_ == approx(expected)
​ E assert 1.01e+25 == 1.1e+25 ± 1.1e+19
​ E + where 1.1e+25 ± 1.1e+19 = approx(1.1e+25)

​ test_few_failures.py: AssertionError
​ 1 failed, 4 passed in 0.06 seconds
Maybe you can spot the problem right off the bat. But let's pretend the test is longer and more
complicated, and it's not obvious what's wrong. Let's run the test again to see the failure again.
You can specify the test case on the command line:
​ ​$ ​​pytest​​ ​​-q​​ ​​"test_few_failures.py::test_a[1e+25-1e+23-1.1e+25]"​
If you don't want to copy/paste or there are multiple failed cases you'd like to rerun, --lf is much
easier. And if you're really debugging a test failure, another flag that might make things easier is
--showlocals, or -l for short:

​ ​$ ​​pytest​​ ​​-q​​ ​​--lf​​ ​​-l​​ ​​test_few_failures.py​
​ F
​ ====================== FAILURES ======================
​ ____________ test_a[1e+25-1e+23-1.1e+25] _____________

​ x = 1e+25, y = 1e+23, expected = 1.1e+25

​ @pytest.mark.parametrize("x,y,expected", testdata)
​ def test_a(x, y, expected):
​ sum_ = x + y
​ > assert sum_ == approx(expected)
​ E assert 1.01e+25 == 1.1e+25 ± 1.1e+19
​ E + where 1.1e+25 ± 1.1e+19 = approx(1.1e+25)

​ expected = 1.1e+25
​ sum_ = 1.01e+25
​ x = 1e+25
​ y = 1e+23

​ test_few_failures.py: AssertionError
​ ================= 4 tests deselected =================
​ 1 failed, 4 deselected in 0.05 seconds
The reason for the failure should be more obvious now.
To pull off the trick of remembering what test failed last time, pytest stores test failure
information from the last test session. You can see the stored information with --cache-show:
​ ​$ ​​pytest​​ ​​--cache-show​
​ ===================== test session starts ======================
​ ------------------------- cache values -------------------------
​ cache/lastfailed contains:
​ {'test_few_failures.py::test_a[1e+25-1e+23-1.1e+25]': True}

​ ================= no tests ran in 0.00 seconds =================
Or you can look in the cache dir:
​ ​$ ​​cat​​ ​​.cache/v/cache/lastfailed​
​ {
​ "test_few_failures.pWHVWBD>HHH@WUXe
​ }
You can pass in --cache-clear to clear the cache before the session.

The cache can be used for more than just --lf and --ff. Let’s make a fixture that records how long
tests take, saves the times, and on the next run, reports an error on tests that take longer than, say,
twice as long as last time.
The interface for the cache fixture is simply:
​ cache.get(key, default)
​ cache.set(key, value)
By convention, key names start with the name of your application or plugin, followed by a /, and
continue with /-separated sections of the key name. The value you store can be anything that
is convertible to json, since that's how it's represented in the .cache directory.
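To make the get/set contract concrete, here's a small sketch that mimics the interface described above. The CacheSketch class is my own stand-in, not pytest code: a dictionary of json strings plays the role of the files under .cache, and values round-trip through json exactly as the text says.

```python
import json

# Hypothetical stand-in for the cache fixture's get/set interface.
# Real pytest stores each value as json under the .cache directory;
# here a dict of json strings plays the role of those files.
class CacheSketch:
    def __init__(self):
        self._store = {}

    def get(self, key, default):
        raw = self._store.get(key)
        return default if raw is None else json.loads(raw)

    def set(self, key, value):
        # json.dumps raises TypeError for non-json-serializable values
        self._store[key] = json.dumps(value)

cache = CacheSketch()
cache.set('myapp/durations', {'test_one': 0.5})
print(cache.get('myapp/durations', {}))     # {'test_one': 0.5}
```

A missing key simply yields the default, which is why the timing fixture below can pass None as the default and test for it.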
Here’s our fixture used to time tests:
ch4/cache/test_slower.py
​ @pytest.fixture(autouse=True)
​ ​def​ check_duration(request, cache):
​     key = ​'duration/'​ + request.node.nodeid.replace(​':'​, ​'_'​)
​     ​# nodeid's can have colons​
​     ​# keys become filenames within .cache​
​     ​# replace colons with something filename safe​
​     start_time = datetime.datetime.now()
​     ​yield​
​     stop_time = datetime.datetime.now()
​     this_duration = (stop_time - start_time).total_seconds()
​     last_duration = cache.get(key, None)
​     cache.set(key, this_duration)
​     ​if​ last_duration ​is​ ​not​ None:
​         errorstring = ​"test duration over 2x last duration"​
​         ​assert​ this_duration <= last_duration * 2, errorstring
The fixture is autouse, so it doesn't need to be referenced from the test. The request object is
used to grab the nodeid for use in the key. The nodeid is a unique identifier that works even with
parametrized tests. We prepend the key with 'duration/' to be good cache citizens. The code
before the yield runs before the test function; the code after the yield happens after the test function.
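The same before-yield/after-yield split can be seen in miniature with a plain context manager. This sketch is mine, not from the book's code; it times a block the way check_duration times a test, with setup before the yield and teardown after it.

```python
import datetime
from contextlib import contextmanager

# A minimal sketch of the setup/teardown split around a yield:
# everything before the yield is setup, everything after is teardown.
@contextmanager
def timed():
    result = {}
    start = datetime.datetime.now()      # runs "before the test"
    yield result
    stop = datetime.datetime.now()       # runs "after the test"
    result['seconds'] = (stop - start).total_seconds()

with timed() as d:
    total = sum(range(1000))

print(d['seconds'] >= 0)    # True
```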
Now we need some tests that take different amounts of time:
ch4/cache/test_slower.py
​ @pytest.mark.parametrize(​'i'​, range(5))
​ ​def​ test_slow_stuff(i):
​     time.sleep(random.random())
Because you probably don't want to write a bunch of tests for this, I used random and
parametrization to easily generate some tests that sleep for a random amount of time, all shorter
than a second. Let's see it run a couple of times:

​ ​$ ​​cd​​ ​​/path/to/code/ch4/cache​
​ ​$ ​​pytest​​ ​​-q​​ ​​--cache-clear​​ ​​test_slower.py​
​ .....
​ 5 passed in 2.10 seconds
​ ​$ ​​pytest​​ ​​-q​​ ​​--tb=line​​ ​​test_slower.py​
​ .E..E.E.



​ ============================ ERRORS ============================
​ ___________ ERROR at teardown of test_slow_stuff[0] ____________
​ E AssertionError: test duration over 2x last duration
​ assert 0.954312 <= (0.380536 * 2)
​ ___________ ERROR at teardown of test_slow_stuff[2] ____________
​ E AssertionError: test duration over 2x last duration
​ assert 0.821745 <= (0.152405 * 2)
​ ___________ ERROR at teardown of test_slow_stuff[3] ____________
​ E AssertionError: test duration over 2x last duration
​ assert 1.001032 <= (0.36674 * 2)
​ 5 passed, 3 error in 3.83 seconds
Well, that was fun. Let's see what's in the cache:
​ ​$ ​​pytest​​ ​​-q​​ ​​--cache-show​
​ ------------------------- cache values -------------------------
​ cache/lastfailed contains:
​ {'test_slower.py::test_slow_stuff[0]': True,
​  'test_slower.py::test_slow_stuff[2]': True,
​  'test_slower.py::test_slow_stuff[3]': True}
​ duration/test_slower.py__test_slow_stuff[0] contains:
​   0.954312
​ duration/test_slower.py__test_slow_stuff[1] contains:
​   0.915539
​ duration/test_slower.py__test_slow_stuff[2] contains:
​   0.821745
​ duration/test_slower.py__test_slow_stuff[3] contains:
​   1.001032
​ duration/test_slower.py__test_slow_stuff[4] contains:
​   0.031884

​ no tests ran in 0.01 seconds
You can easily see the duration data separate from the cache data due to the prefixing of cache
data names. However, it's interesting that the lastfailed functionality is able to operate with one
cache entry. Our duration data is taking up one cache entry per test. Let's follow the lead of
lastfailed and fit our data into one entry.
We are reading and writing to the cache for every test. We could split up the fixture into a
function scope fixture to measure durations and a session scope fixture to read and write to the
cache. However, if we do this, we can't use the cache fixture because it has function scope.
Fortunately, a quick peek at the implementation on GitHub[9]
reveals that the cache fixture is
simply returning request.config.cache. This is available in any scope.
Here's one possible refactoring of the same functionality:
ch4/cache/test_slower_2.py
​ Duration = namedtuple(​'Duration'​, [​'current'​, ​'last'​])


​ @pytest.fixture(scope=​'session'​)
​ ​def​ duration_cache(request):
​     key = ​'duration/testdurations'​
​     d = Duration({}, request.config.cache.get(key, {}))
​     ​yield​ d
​     request.config.cache.set(key, d.current)


​ @pytest.fixture(autouse=True)
​ ​def​ check_duration(request, duration_cache):
​     d = duration_cache
​     nodeid = request.node.nodeid
​     start_time = datetime.datetime.now()
​     ​yield​
​     duration = (datetime.datetime.now() - start_time).total_seconds()
​     d.current[nodeid] = duration
​     ​if​ d.last.get(nodeid, None) ​is​ ​not​ None:
​         errorstring = ​"test duration over 2x last duration"​
​         ​assert​ duration <= (d.last[nodeid] * 2), errorstring
The duration_cache fixture is session scope. It reads the previous entry, or an empty dictionary if
there is no previous cached data, before any tests are run. In the previous code, we saved both the
retrieved dictionary and an empty one in a namedtuple called Duration with accessors current
and last. We then passed that namedtuple to the check_duration fixture, which is function scope
and runs for every test function. As the test runs, the same namedtuple is passed to each test, and
the times for the current test runs are stored in the d.current dictionary. At the end of the test
session, the collected current dictionary is saved in the cache.
After running it a couple of times, let’s look at the saved cache:

​ ​$ ​​pytest​​ ​​-q​​ ​​--cache-clear​​ ​​test_slower_2.py​
​ .....
​ 5 passed in 2.80 seconds
​ ​$ ​​pytest​​ ​​-q​​ ​​--tb=no​​ ​​test_slower_2.py​
​ ...E..E
​ 5 passed, 2 error in 1.97 seconds
​ ​$ ​​pytest​​ ​​-q​​ ​​--cache-show​
​ ------------------------- cache values -------------------------
​ cache/lastfailed contains:
​ {'test_slower_2.py::test_slow_stuff[2]': True,
​  'test_slower_2.py::test_slow_stuff[4]': True}
​ duration/testdurations contains:
​ {'test_slower_2.py::test_slow_stuff[0]': ...,
​  'test_slower_2.py::test_slow_stuff[1]': ...,
​  'test_slower_2.py::test_slow_stuff[2]': ...,
​  'test_slower_2.py::test_slow_stuff[3]': ...,
​  'test_slower_2.py::test_slow_stuff[4]': ...}

​ no tests ran in 0.01 seconds
That looks better.

Using capsys
The capsys builtin fixture provides two bits of functionality: it allows you to retrieve stdout and
stderr from some code, and it disables output capture temporarily. Let's take a look at retrieving
stdout and stderr.
Suppose you have a function to print a greeting to stdout:
ch4/cap/test_capsys.py
​ ​def​ greeting(name):
​ ​print​(​'Hi, {}'​.format(name))
You can’t test it bFKHFNLQJWKHUHWXUQYDOXH<RXKDYHWRWHVWVWGRXWVRPHKRZ<RXFDQWHVWWKe
output bXVLQJFDSVs:
ch4/cap/test_capsVSy
​ ​def​ test_greeting(capsV :
​ greeting(​'Earthling'​)
​     out, err = capsys.readouterr()
​ ​assert​ out == ​'Hi, Earthling​​\n​​'​
​ ​assert​ err == ​''​

​ greeting(​'Brian'​)
​ greeting(​'Nerd'​)
​     out, err = capsys.readouterr()
​ ​assert​ out == ​'Hi, Brian​​\n​​Hi, Nerd​​\n​​'​
​ ​assert​ err == ​''​
The captured stdout and stderr are retrieved from capsys.readouterr(). The return value is
whatever has been captured since the beginning of the function, or from the last time it was
called.
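You can see the same capture-and-drain behavior with only the standard library. This sketch is mine, not the capsys implementation; it shows that each read returns only what arrived since the previous read, which is the semantics described above.

```python
import io
from contextlib import redirect_stdout

# Sketch of readouterr()'s "since the last call" semantics using a
# plain StringIO buffer in place of pytest's capture machinery.
buffer = io.StringIO()

with redirect_stdout(buffer):
    print('Hi, Earthling')
first = buffer.getvalue()

buffer.truncate(0)      # drain the buffer, as readouterr() does
buffer.seek(0)

with redirect_stdout(buffer):
    print('Hi, Brian')
    print('Hi, Nerd')
second = buffer.getvalue()
```

Here `first` holds only the first greeting and `second` only the two later ones, matching how consecutive readouterr() calls behave in the test above.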
The previous example only used stdout. Let's look at an example using stderr:
ch4/cap/test_capsys.py
​ ​def​ yikes(problem):
​     ​print​(​'YIKES! {}'​.format(problem), file=sys.stderr)


​ ​def​ test_yikes(capsys):
​     yikes(​'Out of coffee!'​)
​     out, err = capsys.readouterr()
​ ​assert​ out == ​''​
​ ​assert​ ​'Out of coffee!'​ ​in​ err

pytest usually captures the output from your tests and the code under test. This includes print
statements. The captured output is displayed for failing tests only, after the full test session is
complete. The -s option turns off this feature, and output is sent to stdout while the tests are
running. Usually this works great, as it's the output from the failed tests you need to see in order
to debug the failures. However, you may want to allow some output to make it through the
default pytest output capture, to print some things without printing everything. You can do this
with capsys. You can use capsys.disabled() to temporarily let output get past the capture
mechanism.
Here’s an example:
ch4/cap/test_capsys.py
​ ​def​ test_capsys_disabled(capsys):
​     ​with​ capsys.disabled():
​ ​print​(​'​​\n​​always print this'​)
​ ​print​(​'normal print, usually captured'​)
Now, 'always print this' will always be output:
​ ​$ ​​cd​​ ​​/path/to/code/ch4/cap​
​ ​$ ​​pytest​​ ​​-q​​ ​​test_capsys.py::test_capsys_disabled​

​ always print this
​ .
​ 1 passed in 0.01 seconds
​ ​$ ​​pytest​​ ​​-q​​ ​​-s​​ ​​test_capsys.py::test_capsys_disabled​

​ always print this
​ normal print, usually captured
​ .
​ 1 passed in 0.00 seconds
As you can see, 'always print this' shows up with or without output capturing, since it's being
printed from within a with capsys.disabled() block. The other print statement is just a normal
print statement, so 'normal print, usually captured' is only seen in the output when we pass in the
-s flag, which is a shortcut for --capture=no, turning off output capture.

Using monkeypatch
A "monkey patch" is a dynamic modification of a class or module during runtime. During
testing, "monkey patching" is a convenient way to take over part of the runtime environment of
the code under test and replace either input dependencies or output dependencies with objects or
functions that are more convenient for testing. The monkeypatch builtin fixture allows you to do
this in the context of a single test. And when the test ends, regardless of pass or fail, the original
unpatched code is restored, undoing everything changed by the patch. It's all very hand-wavy until we
jump into some examples. After looking at the API, we'll look at how monkeypatch is used in
test code.
The monkeypatch fixture provides the following functions:
setattr(target, name, value=<notset>, raising=True): Set an attribute.
delattr(target, name=<notset>, raising=True): Delete an attribute.
setitem(dic, name, value): Set a dictionary entry.
delitem(dic, name, raising=True): Delete a dictionary entry.
setenv(name, value, prepend=None): Set an environmental variable.
delenv(name, raising=True): Delete an environmental variable.
syspath_prepend(path): Prepend path to sys.path, which is Python's list of import
locations.
chdir(path): Change the current working directory.
The raising parameter tells pytest whether or not to raise an exception if the item doesn't already
exist. The prepend parameter to setenv() can be a character. If it is set, the value of the
environmental variable will be changed to value + prepend + <old value>.
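As an illustration of that prepend behavior, here's a sketch against a plain dictionary. The set_env_sketch helper is hypothetical, written only to mirror the semantics described above, and is not the monkeypatch API itself.

```python
import os

# Hypothetical helper mimicking setenv(name, value, prepend=char):
# when prepend is given and the variable already exists, the new
# value becomes value + prepend + old_value.
def set_env_sketch(env, name, value, prepend=None):
    if prepend is not None and name in env:
        value = value + prepend + env[name]
    env[name] = value

env = {'PATH': '/usr/bin'}
set_env_sketch(env, 'PATH', '/opt/tools/bin', prepend=os.pathsep)
print(env['PATH'])   # /opt/tools/bin:/usr/bin on Unix-like systems
```

Using os.pathsep as the prepend character is the typical case: it lets a test push an extra directory onto a PATH-style variable without losing the existing entries.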
To see monkeypatch in action, let's look at code that writes a dot configuration file. The
behavior of some programs can be changed with preferences and values set in a dot file in a
user's home directory. Here's a bit of code that reads and writes a cheese preferences file:
ch4/monkey/cheese.py
​ ​import​ os
​ ​import​ json


​ ​def​ read_cheese_preferences():
​ full_path = os.path.expanduser(​'~/.cheese.json'​)
​ ​with​ open(full_path, ​'r'​) ​as​ f:
​ prefs = json.load(f)
​ ​return​ prefs



​ ​def​ write_cheese_preferences(prefs):
​ full_path = os.path.expanduser(​'~/.cheese.json'​)
​ ​with​ open(full_path, ​'w'​) ​as​ f:
​ json.dump(prefs, f, indent=4)


​ ​def​ write_default_cheese_preferences():
​ write_cheese_preferences(_default_prefs)
​ _default_prefs = {
​ ​'slicing'​: [​'manchego'​, ​'sharp cheddar'​],
​ ​'spreadable'​: [​'Saint Andre'​, ​'camembert'​,
​ ​'bucheron'​, ​'goat'​, ​'humbolt fog'​, ​'cambozola'​],
​ ​'salads'​: [​'crumbled feta'​]
​ }
Let’s take a look at how we could test write_default_cheese_preferences(). It’s a function that
takes no parameters and doesn’t return anWKLQJ%XWLWGRHVKDYHDVLGHHIIHFWWKDWZHFDQWHVW,t
writes a file to the current user’s home director.
One approach is to just let it run normallDQGFKHFNWKHVLGHHIIHFW6XSSRVH,DOUHDG have tests
for read_cheese_preferences() and I trust them, so I can use them in the testing of
write_default_cheese_preferences():
ch4/monkeWHVWBFKHHVHSy
​ ​def​ test_def_prefs_full():
​ cheese.write_default_cheese_preferences()
​ expected = cheese._default_prefs
​ actual = cheese.read_cheese_preferences()
​ ​assert​ expected == actual
One problem with this is that anyone who runs this test code will overwrite their own cheese
preferences file. That's not good.
If a user has HOME set, os.path.expanduser() replaces ~ with whatever is in a user's HOME
environmental variable. Let's create a temporary directory and redirect HOME to point to that
new temporary directory:
ch4/monkey/test_cheese.py
​ ​def​ test_def_prefs_change_home(tmpdir, monkeypatch):
​     monkeypatch.setenv(​'HOME'​, tmpdir.mkdir(​'home'​))
​ cheese.write_default_cheese_preferences()
​ expected = cheese._default_prefs
​ actual = cheese.read_cheese_preferences()

​ ​assert​ expected == actual
This is a pretty good test, but relying on HOME seems a little operating-system dependent. And
a peek into the documentation online for expanduser() has some troubling information, including
"On Windows, HOME and USERPROFILE will be used if set, otherwise a combination
of…." [10]
Dang. That may not be good for someone running the test on Windows. Maybe we
should take a different approach.
Instead of patching the HOME environmental variable, let's patch expanduser:
ch4/monkey/test_cheese.py
​ ​def​ test_def_prefs_change_expanduser(tmpdir, monkeypatch):
​     fake_home_dir = tmpdir.mkdir(​'home'​)
​     monkeypatch.setattr(cheese.os.path, ​'expanduser'​,
​ (​lambda​ x: x.replace(​'~'​, str(fake_home_dir))))
​ cheese.write_default_cheese_preferences()
​ expected = cheese._default_prefs
​ actual = cheese.read_cheese_preferences()
​ ​assert​ expected == actual
During the test, anything in the cheese module that calls os.path.expanduser() gets our lambda
expression instead. This little function uses the string method replace() to substitute ~ with our
new temporary directory. Now we've used setenv() and setattr() to do
patching of environmental variables and attributes. Next up, setitem().
Let’s saZHUHZRUULHGDERXWZKDWKDSSHQVLIWKHILOHDOUHDG exists. We want to be sure it gets
overwritten with the defaults when write_default_cheese_preferences() is called:
ch4/monkeWHVWBFKHHVHSy
​ ​def​ test_def_prefs_change_defaults(tmpdir, monkeSDWFK :
​ ​# write the file once ​
​ fake_home_dir = tmpdir.mkdir(​'home'​)
​     monkeypatch.setattr(cheese.os.path, ​'expanduser'​,
​ (​lambda​ x: x.replace(​'~'​, str(fake_home_dir))))
​ cheese.write_default_cheese_preferences()
​     defaults_before = copy.deepcopy(cheese._default_prefs)

​ ​# change the defaults​
​     monkeypatch.setitem(cheese._default_prefs, ​'slicing'​, [​'provolone'​])
​     monkeypatch.setitem(cheese._default_prefs, ​'spreadable'​, [​'brie'​])
​     monkeypatch.setitem(cheese._default_prefs, ​'salads'​, [​'pepper jack'​])
​ defaults_modified = cheese._default_prefs

​ ​# write it again with modified defaults​
​ cheese.write_default_cheese_preferences()


​ ​# read, and check​
​ actual = cheese.read_cheese_preferences()
​ ​assert​ defaults_modified == actual
​ ​assert​ defaults_modified != defaults_before
Because _default_prefs is a dictionary, we can use monkeypatch.setitem() to change dictionary
items just for the duration of the test.
We've used setenv(), setattr(), and setitem(). The del forms are pretty similar. They just delete an
environmental variable, attribute, or dictionary item instead of setting something. The last two
monkeypatch methods pertain to paths.
syspath_prepend(path) prepends a path to sys.path, which has the effect of putting your new path
at the head of the line for module import directories. One use for this would be to replace a
system-wide module or package with a stub version. You can then use
monkeypatch.syspath_prepend() to prepend the directory of your stub version and the code under
test will find the stub version first.
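Here's a sketch of that stub trick using sys.path.insert directly. The fake_payments module name and its charge() function are invented for the example; inside a real test you'd call monkeypatch.syspath_prepend(stub_dir) instead, so the path change is undone automatically at teardown.

```python
import os
import sys
import tempfile

# Write a stub module into a fresh directory; 'fake_payments' is a
# made-up name standing in for some system-wide module.
stub_dir = tempfile.mkdtemp()
with open(os.path.join(stub_dir, 'fake_payments.py'), 'w') as f:
    f.write("def charge(amount):\n"
            "    return 'stub-charged {}'.format(amount)\n")

# Putting the stub directory first means imports find the stub
# before any real module of the same name.
sys.path.insert(0, stub_dir)

import fake_payments
print(fake_payments.charge(5))   # stub-charged 5
```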
chdir(path) changes the current working directory during the test. This would be useful for
testing command-line scripts and other utilities that depend on what the current working
directory is. You could set up a temporary directory with whatever contents make sense for your
script, and then use monkeypatch.chdir(the_tmpdir).
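A sketch of why that's handy, with the read_config() function and app.cfg file invented for the example. In a test, monkeypatch.chdir() would also restore the original directory for you at teardown; here the try/finally does that by hand.

```python
import os
import tempfile

# Hypothetical code under test that depends on the current working
# directory: it reads a config file by relative name.
def read_config():
    with open('app.cfg') as f:
        return f.read()

# Set up a temporary directory with whatever contents make sense.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'app.cfg'), 'w') as f:
    f.write('debug=true')

original = os.getcwd()
os.chdir(tmp)              # the effect of monkeypatch.chdir(tmp)
try:
    contents = read_config()
finally:
    os.chdir(original)     # monkeypatch undoes this automatically
```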
You can also use the monkeypatch fixture functions in conjunction with unittest.mock to
temporarily replace attributes with mock objects. You'll look at that in Chapter 7, ​
Using pytest
with Other Tools ​.

Using doctest_namespace
The doctest module is part of the standard Python library and allows you to put little code
examples inside docstrings for a function and test them to make sure they work. You can have
pytest look for and run doctest tests within your Python code by using the --doctest-modules flag.
With the doctest_namespace builtin fixture, you can build autouse fixtures to add symbols to the
namespace pytest uses while running doctest tests. This allows docstrings to be much more
readable. doctest_namespace is commonly used to add module imports into the namespace,
especially when Python convention is to shorten the module or package name. For instance,
numpy is often imported with import numpy as np.
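The stdlib hook underneath is the globals dictionary that doctest runs examples with. This sketch, using the math module as a stand-in, shows an example passing because the name m was injected through that dictionary rather than imported, which is essentially what doctest_namespace arranges for pytest-run doctests.

```python
import doctest
import math

# An example that uses the name 'm' without importing it; the name is
# supplied through the globs dictionary passed to the doctest runner.
source = '''
>>> m.sqrt(16)
4.0
'''

parser = doctest.DocTestParser()
test = parser.get_doctest(source, {'m': math}, 'example', None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
print(runner.failures, runner.tries)   # 0 1
```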
Let’s plaZLWKDQH[DPSOH/HWVVD we have a module named unnecessarBPDWKS with
multipl DQGGLYLGH PHWKRGVWKDWZHUHDOO want to make sure everRQHXQGHUVWDQGVFOHDUO.
So we throw some usage examples in both the file docstring and the docstrings of the functions:
ch4/dt/1/unnecessarBPDWKSy
​ ​"""​
​ ​This module defines multiply(a, b) and divide(a, b).​

​ ​>>> import unnecessary_math as um​

​ ​Here's how you use multiply:​

​ ​>>> um.multiply(4, 3)​
​ ​12​
​ ​>>> um.multiply('a', 3)​
​ ​'aaa'​


​ ​Here's how you use divide:​

​ ​>>> um.divide(10, 5)​
​ ​2.0​
​ ​"""​


​ ​def​ multiply(a, b):
​ ​"""​
​ ​ Returns a multiplied by b.​

​ ​ >>> um.multiply(4, 3)​
​ ​ 12​
​ ​ >>> um.multiply('a', 3)​
​ ​ 'aaa'​
​ ​ """​
​ ​return​ a * b


​ ​def​ divide(a, b):
​ ​"""​
​ ​ Returns a divided by b.​

​ ​ >>> um.divide(10, 5)​
​ ​ 2.0​
​ ​ """​
​ ​return​ a / b
Since the name unnecessary_math is long, we decide to use um instead, by using import
unnecessary_math as um in the top docstring. The code in the docstrings of the functions doesn't
include the import statement, but continues with the um convention. The problem is that pytest
treats each docstring with code as a different test. The import in the top docstring will allow the
first part to pass, but the code in the docstrings of the functions will fail:
​ ​$ ​​cd​​ ​​/path/to/code/ch4/dt/1​
​ ​$ ​​pytest​​ ​​-v​​ ​​--doctest-modules​​ ​​--tb=short​​ ​​unnecessary_math.py​
​ ======================== test session starts ========================
​ collected 3 items

​ unnecessary_math.py::unnecessary_math PASSED
​ unnecessary_math.py::unnecessary_math.divide FAILED
​ unnecessary_math.py::unnecessary_math.multiply FAILED

​ ============================= FAILURES ==============================
​ _________________ [doctest] unnecessary_math.divide _________________
​ 031
​ 032     Returns a divided by b.
​ 033
​ 034     >>> um.divide(10, 5)
​ UNEXPECTED EXCEPTION: NameError("name 'um' is not defined",)
​ Traceback (most recent call last):
​ ​...​
​ File "@!OLQHLQPRGXOH>

​ NameError: name 'um' is not defined


​ /path/to/code/ch4/dt/1/unnecessary_math.py:34: UnexpectedException
​ ________________ [doctest] unnecessary_math.multiply ________________
​ 022
​ 023     >>> um.multiply(4, 3)
​ UNEXPECTED EXCEPTION: NameError("name 'um' is not defined",)
​ Traceback (most recent call last):
​ ​...​
​ File "", line 1, in

​ NameError: name 'um' is not defined

​ /path/to/code/ch4/dt/1/unnecessary_math.py:23: UnexpectedException
​ ================ 2 failed, 1 passed in 0.03 seconds =================
One way to fix it is to put the import statement in each docstring:
ch4/dt/2/unnecessary_math.py
​ ​def​ multiply(a, b):
​ ​"""​
​ ​ Returns a multiplied by b.​

​ ​ >>> import unnecessary_math as um​
​ ​ >>> um.multiply(4, 3)​
​ ​ 12​
​ ​ >>> um.multiply('a', 3)​
​ ​ 'aaa'​
​ ​ """​
​ ​return​ a * b


​ ​def​ divide(a, b):
​ ​"""​
​ ​ Returns a divided by b.​

​ ​ >>> import unnecessary_math as um​
​ ​ >>> um.divide(10, 5)​
​ ​ 2.0​
​ ​ """​
​ ​return​ a / b
This definitely fixes the problem:
​ ​$ ​​cd​​ ​​/path/to/code/ch4/dt/2​

​ ​$ ​​pytest​​ ​​-v​​ ​​--doctest-modules​​ ​​--tb=short​​ ​​unnecessary_math.py​
​ ======================== test session starts ========================
​ collected 3 items

​ unnecessary_math.py::unnecessary_math PASSED
​ unnecessary_math.py::unnecessary_math.divide PASSED
​ unnecessary_math.py::unnecessary_math.multiply PASSED

​ ===================== 3 passed in 0.03 seconds ======================
However, it also clutters the docstrings, and doesn't add any real value to readers of the code.
The builtin fixture doctest_namespace, used in an autouse fixture at a top-level conftest.py file,
will fix the problem without changing the source code:
ch4/dt/3/conftest.py
​ ​import​ pytest
​ ​import​ unnecessary_math


​ @pytest.fixture(autouse=True)
​ ​def​ add_um(doctest_namespace):
​     doctest_namespace[​'um'​] = unnecessary_math
This tells pytest to add the um name to the doctest_namespace and have it be the value of the
imported unnecessary_math module. With this in place in the conftest.py file, any doctests found
within the scope of this conftest.py file will have the um symbol defined.
I'll cover running doctest from pytest more in Chapter 7, ​
Using pytest with Other Tools ​.

Using recwarn
The recwarn builtin fixture is used to examine warnings generated by code under test. In Python,
you can add warnings that work a lot like assertions, but are used for things that don't need to
stop execution. For example, suppose we want to stop supporting a function that we wish we had
never put into a package but was released for others to use. We can put a warning in the code and
leave it there for a release or two:
ch4/test_warnings.py
​ ​import​ warnings
​ ​import​ pytest


​ ​def​ lame_function():
​ warnings.warn(​"Please stop using this"​, DeprecationWarning)
​ ​# rest of function​
We can make sure the warning is getting issued correctly with a test:
ch4/test_warnings.py
​ ​def​ test_lame_function(recwarn):
​ lame_function()
​ ​assert​ len(recwarn) == 1
​ w = recwarn.pop()
​     ​assert​ w.category == DeprecationWarning
​ ​assert​ str(w.message) == ​'Please stop using this'​
The recwarn value acts like a list of warnings, and each warning in the list has a category,
message, filename, and lineno defined, as shown in the code.
The warnings are collected at the beginning of the test. If that is inconvenient because the portion
of the test where you care about warnings is near the end, you can use recwarn.clear() to clear
out the list before the chunk of the test where you do care about collecting warnings.
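recwarn sits on top of the same machinery as the standard library's warnings.catch_warnings; this sketch shows the equivalent checks without pytest, recording warnings into a list the way recwarn does during a test.

```python
import warnings

def lame_function():
    warnings.warn("Please stop using this", DeprecationWarning)

# Record warnings into a list, the way recwarn does during a test.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')   # make sure nothing is filtered out
    lame_function()

w = caught[0]
print(w.category.__name__, '-', w.message)
```

Each recorded entry carries the same category and message attributes the recwarn assertions above rely on.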
In addition to recwarn, pytest can check for warnings with pytest.warns():
ch4/test_warnings.py
​ ​def​ test_lame_function_2():
​     ​with​ pytest.warns(None) ​as​ warning_list:
​ lame_function()

​ ​assert​ len(warning_list) == 1
​ w = warning_list.pop()
​     ​assert​ w.category == DeprecationWarning
​ ​assert​ str(w.message) == ​'Please stop using this'​

The pytest.warns() context manager provides an elegant way to demark what portion of the code
you're checking warnings in. The recwarn fixture and the pytest.warns() context manager provide
similar functionality, though, so the decision of which to use is purely a matter of taste.

Exercises
1. In ch4/cache/test_slower.py, there is an autouse fixture called check_duration(). Copy it into ch3/tasks_proj/tests/conftest.py.
2. Run the tests in Chapter 3.
3. For tests that are really fast, 2x really fast is still really fast. Instead of 2x, change the fixture to check for 0.1 second plus 2x the last duration.
4. Run pytest with the modified fixture. Do the results seem reasonable?

What’s Next
In this chapter, you looked at many of pytest’s builtin fixtures. Next, you’ll take a closer look at plugins. The nuances of writing large plugins could be a book in itself; however, small custom plugins are a regular part of the pytest ecosystem.
Footnotes
[8] http://py.readthedocs.io/en/latest/path.html
[9] https://github.com/pytest-dev/pytest/blob/master/_pytest/cacheprovider.py
[10] https://docs.python.org/library/os.path.html#os.path.expanduser
Copyright © The Pragmatic Bookshelf.

Chapter 5
Plugins
As powerful as pytest is right out of the box, it gets even better when you add plugins to the mix. The pytest code base is structured with customization and extensions in mind, and there are hooks available to allow modifications and improvements through plugins.

It might surprise you to know that you’ve already written some plugins if you’ve worked through the previous chapters in this book. Any time you put fixtures and/or hook functions into a project’s top-level conftest.py file, you created a local conftest plugin. It’s just a little bit of extra work to convert these conftest.py files into installable plugins that you can share between projects, with other people, or with the world.

We will start this chapter looking at where to look for third-party plugins. Quite a few plugins are available, so there’s a decent chance someone has already written the change you want to make to pytest. Since we will be looking at open source plugins, if a plugin does almost what you want to do but not quite, you can fork it, or use it as a reference for creating your own plugin. While this chapter is about creating your own plugins, Appendix 3, Plugin Sampler Pack, is included to give you a taste of what’s possible.

In this chapter, you’ll learn how to create plugins, and I’ll point you in the right direction to test, package, and distribute them. The full topic of Python packaging and distribution is probably a book of its own, so we won’t cover everything. But you’ll get far enough to be able to share plugins with your team. I’ll also discuss some shortcuts to getting PyPI-distributed plugins up with the least amount of work.

Finding Plugins
You can find third-party pytest plugins in several places. The plugins listed in Appendix 3, Plugin Sampler Pack, are all available for download from PyPI. However, that’s not the only place to look for great pytest plugins.

https://docs.pytest.org/en/latest/plugins.html
The main pytest documentation site has a page that talks about installing and using pytest plugins, and lists a few common plugins.

https://pypi.python.org
The Python Package Index (PyPI) is a great place to get lots of Python packages, but it is also a great place to find pytest plugins. When looking for pytest plugins, it should work pretty well to enter “pytest,” “pytest-,” or “-pytest” into the search box, since most pytest plugins either start with “pytest-” or end in “-pytest.”

https://github.com/pytest-dev
The “pytest-dev” group on GitHub is where the pytest source code is kept. It’s also where you can find some popular pytest plugins that are intended to be maintained long-term by the pytest core team.

Installing Plugins
pytest plugins are installed with pip, just like other Python packages. However, you can use pip in several different ways to install plugins.

Install from PyPI
As PyPI is the default location for pip, installing plugins from PyPI is the easiest method. Let’s install the pytest-cov plugin:

$ pip install pytest-cov

This installs the latest stable version from PyPI.

Install a Particular Version from PyPI
If you want a particular version of a plugin, you can specify the version after ==:

$ pip install pytest-cov==2.4.0

Install from a .tar.gz or .whl File
Packages on PyPI are distributed as zipped files with the extensions .tar.gz and/or .whl. These are often referred to as “tar balls” and “wheels.” If you’re having trouble getting pip to work with PyPI directly (which can happen with firewalls and other network complications), you can download either the .tar.gz or the .whl and install from that.

You don’t have to unzip or anything; just point pip at it:

$ pip install pytest-cov-2.4.0.tar.gz
# or
$ pip install pytest_cov-2.4.0-py2.py3-none-any.whl

Install from a Local Directory
You can keep a local stash of plugins (and other Python packages) in a local or shared directory in .tar.gz or .whl format and use that instead of PyPI for installing plugins:

$ mkdir some_plugins
$ cp pytest_cov-2.4.0-py2.py3-none-any.whl some_plugins/
$ pip install --no-index --find-links=./some_plugins/ pytest-cov

The --no-index tells pip to not connect to PyPI. The --find-links=./some_plugins/ tells pip to look in the directory called some_plugins. This technique is especially useful if you have both third-party and your own custom plugins stored locally, and also if you’re creating new virtual environments for continuous integration or with tox. (We’ll talk about both tox and continuous integration in Chapter 7, Using pytest with Other Tools.)
Note that with the local directory install method, you can install multiple versions and specify which version you want by adding == and the version number:

$ pip install --no-index --find-links=./some_plugins/ pytest-cov==2.4.0

Install from a Git Repository
You can install plugins directly from a Git repository; in this case, GitHub:

$ pip install git+https://github.com/pytest-dev/pytest-cov

You can also specify a version tag:

$ pip install git+https://github.com/pytest-dev/pytest-cov@v2.4.0

Or you can specify a branch:

$ pip install git+https://github.com/pytest-dev/pytest-cov@master

Installing from a Git repository is especially useful if you’re storing your own work within Git, or if the plugin or plugin version you want isn’t on PyPI.

Writing Your Own Plugins
Many third-party plugins contain quite a bit of code. That’s one of the reasons we use them: to save us the time to develop all of that code ourselves. However, for your specific coding domain, you’ll undoubtedly come up with special fixtures and modifications that help you test. Even a handful of fixtures that you want to share between a couple of projects can be shared easily by creating a plugin. You can share those changes with multiple projects, and possibly the rest of the world, by developing and distributing your own plugins. It’s pretty easy to do so. In this section, we’ll develop a small modification to pytest behavior, package it as a plugin, test it, and look into how to distribute it.

Plugins can include hook functions that alter pytest’s behavior. Because pytest was developed with the intent to allow plugins to change quite a bit about the way pytest behaves, a lot of hook functions are available. The hook functions for pytest are specified on the pytest documentation site.[11]
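As one more taste of what hooks can do, a conftest.py or plugin can reorder the collected tests through the pytest_collection_modifyitems hook. The sketch below is hypothetical; it uses a plain list of names to stand in for the item objects pytest would actually pass:

```python
# Hypothetical conftest.py content: reverse the order tests run in.
# pytest calls this hook after collection; mutating 'items' in place
# changes the run order.
def pytest_collection_modifyitems(session, config, items):
    items.reverse()

# Stand-in demonstration: in a real run, pytest supplies collected
# test items instead of these strings.
collected = ["test_one", "test_two", "test_three"]
pytest_collection_modifyitems(None, None, collected)
```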
For our example, we’ll create a plugin that changes the way the test status looks. We’ll also include a command-line option to turn on this new behavior. We’re also going to add some text to the output header. Specifically, we’ll change all of the FAILED status indicators to “OPPORTUNITY for improvement,” change F to O, and add “Thanks for running the tests” to the header. We’ll use the --nice option to turn the behavior on.

To keep the behavior changes separate from the discussion of plugin mechanics, we’ll make our changes in conftest.py before turning it into a distributable plugin. You don’t have to start plugins this way. But frequently, changes you only intended to use on one project will become useful enough to share and grow into a plugin. Therefore, we’ll start by adding functionality to a conftest.py file, then after we get things working in conftest.py, we’ll move the code to a package.

Let’s go back to the Tasks project. In Expecting Exceptions, we wrote some tests that made sure exceptions were raised if someone called an API function incorrectly. Looks like we missed at least a few possible error conditions.

Here are a couple more tests:
ch5/a/tasks_proj/tests/func/test_api_exceptions.py
import pytest
import tasks
from tasks import Task


@pytest.mark.usefixtures('tasks_db')
class TestAdd():
    """Tests related to tasks.add()."""

    def test_missing_summary(self):
        """Should raise an exception if summary missing."""
        with pytest.raises(ValueError):
            tasks.add(Task(owner='bob'))

    def test_done_not_bool(self):
        """Should raise an exception if done is not a bool."""
        with pytest.raises(ValueError):
            tasks.add(Task(summary='summary', done='True'))
Let’s run them to see if they pass:

$ cd /path/to/code/ch5/a/tasks_proj
$ pytest
===================== test session starts ======================
collected 57 items

tests/func/test_add.py ...
tests/func/test_add_variety.py ............................
tests/func/test_add_variety.py ............
tests/func/test_api_exceptions.py .......F.
tests/func/test_unique_id.py .
tests/unit/test_task.py ....

=========================== FAILURES ===========================
__________________ TestAdd.test_done_not_bool __________________

self =

    def test_done_not_bool(self):
        """Should raise an exception if done is not a bool."""
        with pytest.raises(ValueError):
>           tasks.add(Task(summary='summary', done='True'))
E           Failed: DID NOT RAISE

tests/func/test_api_exceptions.py: Failed
============= 1 failed, 56 passed in 0.28 seconds ==============
Let’s run it again with -v for verbose. Since you’ve already seen the traceback, you can turn that off with --tb=no.

And now let’s focus on the new tests with -k TestAdd, which works because there aren’t any other tests with names that contain “TestAdd.”

$ cd /path/to/code/ch5/a/tasks_proj/tests/func
$ pytest -v --tb=no test_api_exceptions.py -k TestAdd
===================== test session starts ======================
collected 9 items

test_api_exceptions.py::TestAdd::test_missing_summary PASSED
test_api_exceptions.py::TestAdd::test_done_not_bool FAILED

====================== 7 tests deselected ======================
======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======
We could go off and try to fix this test (and we should, later), but now we are focused on trying to make failures more pleasant for developers.

Let’s start by adding the “thank you” message to the header, which you can do with a pytest hook called pytest_report_header().
ch5/b/tasks_proj/tests/conftest.py
def pytest_report_header():
    """Thank tester for running tests."""
    return "Thanks for running the tests."
Obviously, printing a thank-you message is rather silly. However, the ability to add information to the header can be extended to add a username, and to specify hardware used and versions under test. Really, anything you can convert to a string, you can stuff into the test header.
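As an example of stuffing more into the header, this sketch builds a multi-line header with the standard library's platform module; the pytest_report_header() hook may return either a string or a list of strings, one per header line. (In a real project, this function would live in conftest.py; the version here is a hypothetical illustration, not the book's code.)

```python
import platform

def pytest_report_header():
    # Returning a list makes each element its own header line.
    return [
        "Thanks for running the tests.",
        "python: {}".format(platform.python_version()),
        "machine: {}".format(platform.machine()),
    ]

header_lines = pytest_report_header()
```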
Next, we’ll change the status reporting for tests to change F to O and FAILED to OPPORTUNITY for improvement. There’s a hook function that allows for this type of shenanigans: pytest_report_teststatus():
ch5/b/tasks_proj/tests/conftest.py
def pytest_report_teststatus(report):
    """Turn failures into opportunities."""
    if report.when == 'call' and report.failed:
        return (report.outcome, 'O', 'OPPORTUNITY for improvement')
And now we have just the output we were looking for. A test session with no --verbose flag shows an O for failures, er, improvement opportunities:

$ cd /path/to/code/ch5/b/tasks_proj/tests/func
$ pytest --tb=no test_api_exceptions.py -k TestAdd
===================== test session starts ======================
Thanks for running the tests.
collected 9 items

test_api_exceptions.py .O

====================== 7 tests deselected ======================
======= 1 failed, 1 passed, 7 deselected in 0.06 seconds =======
And the -v or --verbose flag will be nicer also:

$ pytest -v --tb=no test_api_exceptions.py -k TestAdd
===================== test session starts ======================
Thanks for running the tests.
collected 9 items

test_api_exceptions.py::TestAdd::test_missing_summary PASSED
test_api_exceptions.py::TestAdd::test_done_not_bool OPPORTUNITY for improvement

====================== 7 tests deselected ======================
======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======
The last modification we’ll make is to add a command-line option, --nice, to only have our status modifications occur if --nice is passed in:
ch5/c/tasks_proj/tests/conftest.py
import pytest


def pytest_addoption(parser):
    """Turn nice features on with --nice option."""
    group = parser.getgroup('nice')
    group.addoption("--nice", action="store_true",
                    help="nice: turn failures into opportunities")


def pytest_report_header():
    """Thank tester for running tests."""
    if pytest.config.getoption('nice'):
        return "Thanks for running the tests."


def pytest_report_teststatus(report):
    """Turn failures into opportunities."""
    if report.when == 'call':
        if report.failed and pytest.config.getoption('nice'):
            return (report.outcome, 'O', 'OPPORTUNITY for improvement')
This is a good place to note that for this plugin, we are using just a couple of hook functions. There are many more, which can be found on the main pytest documentation site.[12]

We can manually test our plugin just by running it against our example file. First, with no --nice option, to make sure just the username shows up:

$ cd /path/to/code/ch5/c/tasks_proj/tests/func
$ pytest --tb=no test_api_exceptions.py -k TestAdd
===================== test session starts ======================
collected 9 items

test_api_exceptions.py .F

====================== 7 tests deselected ======================
======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======
Now with --nice:

$ pytest --nice --tb=no test_api_exceptions.py -k TestAdd
===================== test session starts ======================
Thanks for running the tests.
collected 9 items

test_api_exceptions.py .O

====================== 7 tests deselected ======================
======= 1 failed, 1 passed, 7 deselected in 0.07 seconds =======
And with --nice and --verbose:

$ pytest -v --nice --tb=no test_api_exceptions.py -k TestAdd
===================== test session starts ======================
Thanks for running the tests.
collected 9 items

test_api_exceptions.py::TestAdd::test_missing_summary PASSED
test_api_exceptions.py::TestAdd::test_done_not_bool OPPORTUNITY for improvement

====================== 7 tests deselected ======================
======= 1 failed, 1 passed, 7 deselected in 0.06 seconds =======
Great! All of the changes we wanted are done with about a dozen lines of code in a conftest.py
file. Next, we’ll move this code into a plugin structure.

Creating an Installable Plugin
The process for sharing plugins with others is well defined. Even if you never put your own plugin up on PyPI, by walking through the process, you’ll have an easier time reading the code from open source plugins and be better equipped to judge if they will help you or not.

It would be overkill to fully cover Python packaging and distribution in this book, as the topic is well documented elsewhere.[13][14] However, it’s a small task to go from the local config plugin we created in the previous section to something pip-installable.

First, we need to create a new directory to put our plugin code in. It does not matter what you call it, but since we are making a plugin for the “nice” flag, let’s call it pytest-nice. We will have two files in this new directory: pytest_nice.py and setup.py. (The tests directory will be discussed in Testing Plugins.)

pytest-nice
├── LICENCE
├── README.rst
├── pytest_nice.py
├── setup.py
└── tests
    ├── conftest.py
    └── test_nice.py
In pytest_nice.py, we’ll put the exact contents of our conftest.py that were related to this feature (and take it out of the tasks_proj/tests/conftest.py):
ch5/pytest-nice/pytest_nice.py
"""Code for pytest-nice plugin."""

import pytest


def pytest_addoption(parser):
    """Turn nice features on with --nice option."""
    group = parser.getgroup('nice')
    group.addoption("--nice", action="store_true",
                    help="nice: turn FAILED into OPPORTUNITY for improvement")


def pytest_report_header():
    """Thank tester for running tests."""
    if pytest.config.getoption('nice'):
        return "Thanks for running the tests."


def pytest_report_teststatus(report):
    """Turn failures into opportunities."""
    if report.when == 'call':
        if report.failed and pytest.config.getoption('nice'):
            return (report.outcome, 'O', 'OPPORTUNITY for improvement')
In setup.py, we need a very minimal call to setup():
ch5/pytest-nice/setup.py
"""Setup for pytest-nice plugin."""

from setuptools import setup

setup(
    name='pytest-nice',
    version='0.1.0',
    description='A pytest plugin to turn FAILURE into OPPORTUNITY',
    url='https://wherever/you/have/info/on/this/package',
    author='Your Name',
    author_email='your_email@somewhere.com',
    license='proprietary',
    py_modules=['pytest_nice'],
    install_requires=['pytest'],
    entry_points={'pytest11': ['nice = pytest_nice', ], },
)
You’ll want more information in your setup if you’re going to distribute to a wide audience or online. However, for a small team or just for yourself, this will suffice.

You can include many more parameters to setup(); we only have the required fields. The version field is the version of this plugin, and it’s up to you when you bump the version. The url field is required; you can leave it out, but you get a warning if you do. The author and author_email fields can be replaced with maintainer and maintainer_email, but one of those pairs needs to be there. The license field is a short text field. It can be one of the many open source licenses, your name or company, or whatever is appropriate for you. The py_modules entry lists pytest_nice as our one and only module for this plugin. Although it’s a list and you could include more than one module, if I had more than one, I’d use packages instead and put all the modules inside a directory.

So far, all of the parameters to setup() are standard and used for all Python installers. The piece that is different for pytest plugins is the entry_points parameter. We have listed entry_points={'pytest11': ['nice = pytest_nice', ], },. The entry_points feature is standard for setuptools, but pytest11 is a special identifier that pytest looks for. With this line, we are telling pytest that nice is the name of our plugin, and pytest_nice is the name of the module where our plugin lives. If we had used a package, our entry here would be:

entry_points={'pytest11': ['name_of_plugin = myproject.pluginmodule',], },
I haven’t talked about the README.rst file yet. Some form of README is a requirement of setuptools. If you leave it out, you’ll get this:

...
warning: sdist: standard file not found: should have one of README,
README.rst, README.txt
...

Keeping a README around as a standard way to include some information about a project is a good idea anyway. Here’s what I’ve put in the file for pytest-nice:
ch5/pytest-nice/README.rst
pytest-nice : A pytest plugin
=============================

Makes pytest output just a bit nicer during failures.

Features
--------

- Includes user name of person running tests in pytest output.
- Adds ``--nice`` option that:

  - turns ``F`` to ``O``
  - with ``-v``, turns ``FAILURE`` to ``OPPORTUNITY for improvement``

Installation
------------

Given that our pytest plugins are being saved in .tar.gz form in the
shared directory PATH, then install like this:

::

    $ pip install PATH/pytest-nice-0.1.0.tar.gz
    $ pip install --no-index --find-links PATH pytest-nice

Usage
-----

::

    $ pytest --nice

There are lots of opinions about what should be in a README. This is a rather minimal version, but it works.

Testing Plugins
Plugins are code that needs to be tested just like any other code. However, testing a change to a testing tool is a little tricky. When we developed the plugin code in Writing Your Own Plugins, we tested it manually by using a sample test file, running pytest against it, and looking at the output to make sure it was right. We can do the same thing in an automated way using a plugin called pytester that ships with pytest but is disabled by default.

Our test directory for pytest-nice has two files: conftest.py and test_nice.py. To use pytester, we need to add just one line to conftest.py:
ch5/pytest-nice/tests/conftest.py
"""pytester is needed for testing plugins."""
pytest_plugins = 'pytester'

This turns on the pytester plugin. We will be using a fixture called testdir that becomes available when pytester is enabled.
Often, tests for plugins take on the form we’ve described in manual steps:
1. Make an example test file.
2. Run pytest with or without some options in the directory that contains our example file.
3. Examine the output.
4. Possibly check the result code: 0 for all passing, 1 for some failing.
Let’s look at one example:
ch5/pytest-nice/tests/test_nice.py
def test_pass_fail(testdir):

    # create a temporary pytest test module
    testdir.makepyfile("""
        def test_pass():
            assert 1 == 1

        def test_fail():
            assert 1 == 2
        """)

    # run pytest
    result = testdir.runpytest()

    # fnmatch_lines does an assertion internally
    result.stdout.fnmatch_lines([
        '*.F',  # . for Pass, F for Fail
    ])

    # make sure that we get a '1' exit code for the testsuite
    assert result.ret == 1
The testdir fixture automatically creates a temporary directory for us to put test files in. It has a method called makepyfile() that allows us to put in the contents of a test file. In this case, we are creating two tests: one that passes and one that fails.

We run pytest against the new test file with testdir.runpytest(). You can pass in options if you want. The return value can then be examined further, and is of type RunResult.[15]

Usually, I look at stdout and ret. For checking the output like we did manually, use fnmatch_lines, passing in a list of strings that we want to see in the output, and then making sure that ret is 0 for passing sessions and 1 for failing sessions. The strings passed into fnmatch_lines can include glob wildcards.

We can use our example file for more tests. Instead of duplicating that code, let’s make a fixture:
ch5/pytest-nice/tests/test_nice.py
@pytest.fixture()
def sample_test(testdir):
    testdir.makepyfile("""
        def test_pass():
            assert 1 == 1

        def test_fail():
            assert 1 == 2
        """)
    return testdir
Now, for the rest of the tests, we can use sample_test as a directory that already contains our sample test file. Here are the tests for the other option variants:
ch5/pytest-nice/tests/test_nice.py
def test_with_nice(sample_test):
    result = sample_test.runpytest('--nice')
    result.stdout.fnmatch_lines(['*.O', ])  # . for Pass, O for Fail
    assert result.ret == 1


def test_with_nice_verbose(sample_test):
    result = sample_test.runpytest('-v', '--nice')
    result.stdout.fnmatch_lines([
        '*::test_fail OPPORTUNITY for improvement',
    ])
    assert result.ret == 1


def test_not_nice_verbose(sample_test):
    result = sample_test.runpytest('-v')
    result.stdout.fnmatch_lines(['*::test_fail FAILED'])
    assert result.ret == 1
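The '*' patterns passed to fnmatch_lines follow the same glob rules as the standard library's fnmatch module, which you can experiment with directly:

```python
import fnmatch

# '*' matches any run of characters, '?' matches a single character.
assert fnmatch.fnmatch("test_nice.py .O", "*.O")
assert fnmatch.fnmatch("test_nice.py::test_fail FAILED", "*::test_fail FAILED")
assert not fnmatch.fnmatch("test_nice.py .F", "*.O")
```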
Just a couple more tests to write. Let’s make sure our thank-you message is in the header:
ch5/pytest-nice/tests/test_nice.py
def test_header(sample_test):
    result = sample_test.runpytest('--nice')
    result.stdout.fnmatch_lines(['Thanks for running the tests.'])


def test_header_not_nice(sample_test):
    result = sample_test.runpytest()
    thanks_message = 'Thanks for running the tests.'
    assert thanks_message not in result.stdout.str()

This could have been part of the other tests also, but I like to have it in a separate test so that one test checks one thing.
Finally, let’s check the help text:
ch5/pytest-nice/tests/test_nice.py
def test_help_message(testdir):
    result = testdir.runpytest('--help')

    # fnmatch_lines does an assertion internally
    result.stdout.fnmatch_lines([
        'nice:',
        '*--nice*nice: turn FAILED into OPPORTUNITY for improvement',
    ])
I think that’s a pretty good check to make sure our plugin works.

To run the tests, let’s start in our pytest-nice directory and make sure our plugin is installed. We do this either by installing the .tar.gz file or installing the current directory in editable mode:

$ cd /path/to/code/ch5/pytest-nice/
$ pip install .
Processing /path/to/code/ch5/pytest-nice
Requirement already satisfied: pytest in
/path/to/venv/lib/python/site-packages (from pytest-nice==0.1.0)
Requirement already satisfied: py>=1.4.33 in
/path/to/venv/lib/python/site-packages (from pytest->pytest-nice)
Requirement already satisfied: setuptools in
/path/to/venv/lib/python/site-packages (from pytest->pytest-nice)
Building wheels for collected packages: pytest-nice
Running setup.py bdist_wheel for pytest-nice ... done
...
Successfully built pytest-nice
Installing collected packages: pytest-nice
Successfully installed pytest-nice-0.1.0
Now that it’s installed, let’s run the tests:

$ pytest -v
===================== test session starts ======================
plugins: nice-0.1.0
collected 7 items

tests/test_nice.py::test_pass_fail PASSED
tests/test_nice.py::test_with_nice PASSED
tests/test_nice.py::test_with_nice_verbose PASSED
tests/test_nice.py::test_not_nice_verbose PASSED
tests/test_nice.py::test_header PASSED
tests/test_nice.py::test_header_not_nice PASSED
tests/test_nice.py::test_help_message PASSED

=================== 7 passed in 0.34 seconds ===================
Yay! All the tests pass! We can uninstall it just like any other Python package or pytest plugin:

$ pip uninstall pytest-nice
Uninstalling pytest-nice-0.1.0:
/path/to/venv/lib/python/site-packages/pytest-nice.egg-link
...
Proceed (y/n)? y
Successfully uninstalled pytest-nice-0.1.0

A great way to learn more about plugin testing is to look at the tests contained in other pytest plugins available through PyPI.

Creating a Distribution
Believe it or not, we are almost done with our plugin. From the command line, we can use this setup.py file to create a distribution:

$ cd /path/to/code/ch5/pytest-nice
$ python setup.py sdist
running sdist
running egg_info
creating pytest_nice.egg-info
...
running check
creating pytest-nice-0.1.0
...
creating dist
Creating tar archive
...
$ ls dist
pytest-nice-0.1.0.tar.gz

(Note that sdist stands for “source distribution.”)
Within pytest-nice, a dist directory contains a new file called pytest-nice-0.1.0.tar.gz. This file can now be used anywhere to install our plugin, even in place:

$ pip install dist/pytest-nice-0.1.0.tar.gz
Processing ./dist/pytest-nice-0.1.0.tar.gz
...
Installing collected packages: pytest-nice
Successfully installed pytest-nice-0.1.0

However, you can put your .tar.gz files anywhere you’ll be able to get at them to use and share.
Distributing Plugins Through a Shared Directory
pip already supports installing packages from shared directories, so all we have to do to distribute our plugin through a shared directory is pick a location we can remember and put the .tar.gz files for our plugins there. Let’s say we put pytest-nice-0.1.0.tar.gz into a directory called myplugins.

To install pytest-nice from myplugins:

$ pip install --no-index --find-links myplugins pytest-nice

The --no-index tells pip to not go out to PyPI to look for what you want to install. The --find-links myplugins tells pip to look in myplugins for packages to install. And of course, pytest-nice is what we want to install.

If you’ve done some bug fixes and there are newer versions in myplugins, you can upgrade by adding --upgrade:

$ pip install --upgrade --no-index --find-links myplugins pytest-nice

This is just like any other use of pip, but with the --no-index --find-links myplugins added.
Distributing Plugins Through PyPI
If you want to share your plugin with the world, there are a few more steps we need to do. Actually, there are quite a few more steps. However, because this book isn’t focused on contributing to open source, I recommend checking out the thorough instructions found in the Python Packaging User Guide.[16]

When you are contributing a pytest plugin, another great place to start is by using the cookiecutter-pytest-plugin:[17]

$ pip install cookiecutter
$ cookiecutter https://github.com/pytest-dev/cookiecutter-pytest-plugin

This project first asks you some questions about your plugin. Then it creates a good directory for you to explore and fill in with your code. Walking through this is beyond the scope of this book; however, please keep this project in mind. It is supported by core pytest folks, and they will make sure this project stays up to date.

Exercises
In ch4/cache/test_slower.py, there is an autouse fixture called check_duration. You used it in the Chapter 4 exercises as well. Now, let’s make a plugin out of it.
1. Create a directory named pytest-slower that will hold the code for the new plugin, similar to the directory described in Creating an Installable Plugin.
2. Fill out all the files of the directory to make pytest-slower an installable plugin.
3. Write some test code for the plugin.
4. Take a look at the Python Package Index[18] and search for “pytest-”. Find a pytest plugin that looks interesting to you.
5. Install the plugin you chose and try it out on Tasks tests.

What’s Next
You’ve used conftest.py a lot so far in this book. There are also configuration files that affect how pytest runs, such as pytest.ini. In the next chapter, you’ll run through the different configuration files and learn what you can do there to make your testing life easier.
Footnotes
[11] http://doc.pytest.org/en/latest/_modules/_pytest/hookspec.html
[12] https://docs.pytest.org/en/latest/writing_plugins.html
[13] http://python-packaging.readthedocs.io
[14] https://www.pypa.io
[15] https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.pytester.RunResult
[16] https://packaging.python.org/distributing
[17] https://github.com/pytest-dev/cookiecutter-pytest-plugin
[18] https://pypi.python.org/pypi

Chapter 6
Configuration
So far in this book, I’ve talked about the various non-test files that affect pytest, mostly in passing, with the exception of conftest.py, which I covered quite thoroughly in Chapter 5, Plugins. In this chapter, we’ll take a look at the configuration files that affect pytest, discuss how pytest changes its behavior based on them, and make some changes to the configuration files of the Tasks project.

Understanding pytest Configuration Files
Before I discuss how you can alter pytest’s default behavior, let’s run down all of the non-test files in pytest, and specifically who should care about them. Everyone should know about these:

pytest.ini: This is the primary pytest configuration file that allows you to change default behavior. Since there are quite a few configuration changes you can make, a big chunk of this chapter is about the settings you can make in pytest.ini.

conftest.py: This is a local plugin to allow hook functions and fixtures for the directory where the conftest.py file exists and all subdirectories. conftest.py files are covered in Chapter 5, Plugins.

__init__.py: When put into every test subdirectory, this file allows you to have identical test filenames in multiple test directories. We’ll look at an example of what can go wrong without __init__.py files in test directories in Avoiding Filename Collisions.

If you use tox, you’ll be interested in:

tox.ini: This file is similar to pytest.ini, but for tox. However, you can put your pytest configuration here instead of having both a tox.ini and a pytest.ini file, saving you one configuration file. Tox is covered in Chapter 7, Using pytest with Other Tools.

If you want to distribute a Python package (like Tasks), this file will be of interest:

setup.cfg: This is a file that’s also in ini file format and affects the behavior of setup.py. It’s possible to add a couple of lines to setup.py to allow you to run python setup.py test and have it run all of your pytest tests. If you are distributing a package, you may already have a setup.cfg file, and you can use that file to store pytest configuration. You’ll see how in Appendix 4, Packaging and Distributing Python Projects.

Regardless of which file you put your pytest configuration in, the format will mostly be the same.
For pytest.ini:
ch6/format/pytest.ini
[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...

For tox.ini:
ch6/format/tox.ini
... tox specific stuff ...

[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...

For setup.cfg:
ch6/format/setup.cfg
... packaging specific stuff ...

[tool:pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...

The only difference is that the section header for setup.cfg is [tool:pytest] instead of [pytest].
List the Valid ini-file Options with pytest --help

You can get a list of all the valid settings for pytest.ini from pytest --help:

$ pytest --help
...
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)         markers for test functions
  norecursedirs (args)       directory patterns to avoid for recursion
  testpaths (args)           directories to search for tests when no files or
                             directories are given in the command line.
  usefixtures (args)         list of default fixtures to be used with this project
  python_files (args)        glob-style file patterns for Python test module discovery
  python_classes (args)      prefixes or glob names for Python test class discovery
  python_functions (args)    prefixes or glob names for Python test function and
                             method discovery
  xfail_strict (bool)        default for the strict parameter of xfail markers
                             when not given explicitly (default: False)
  doctest_optionflags (args) option flags for doctests
  addopts (args)             extra command line options
  minversion (string)        minimally required pytest version
...
You'll look at all of these settings in this chapter, except doctest_optionflags, which is covered in
Chapter 7, Using pytest with Other Tools.
Plugins Can Add ini-file Options

The previous settings list is not a constant. It is possible for plugins (and conftest.py files) to add
ini file options. The added options will be added to the pytest --help output as well.

Now, let's explore some of the configuration changes we can make with the builtin ini file
settings available from core pytest.

Changing the Default Command-Line Options
You've used a lot of command-line options for pytest so far, like -v/--verbose for verbose output
and -l/--showlocals to see local variables with the stack trace for failed tests. You may find
yourself always using some of those options—or preferring to use them—for a project. If you set
addopts in pytest.ini to the options you want, you don't have to type them in anymore. Here's a
set I like:

[pytest]
addopts = -rsxX -l --tb=short --strict

The -rsxX tells pytest to report the reasons for all tests that skipped, xfailed, or xpassed. The -l
tells pytest to report the local variables for every failure with the stacktrace. The --tb=short
removes a lot of the stack trace. It leaves the file and line number, though. The --strict option
disallows markers to be used if they aren't registered in a config file. You'll see how to do that in
the next section.

Registering Markers to Avoid Marker Typos

Custom markers, as discussed in Marking Test Functions, are great for allowing you to mark a
subset of tests to run with a specific marker. However, it's too easy to misspell a marker and end
up having some tests marked with @pytest.mark.smoke and some marked with
@pytest.mark.somke. By default, this isn't an error. pytest just thinks you created two markers.
This can be fixed, however, by registering markers in pytest.ini, like this:

[pytest]
markers =
    smoke: Run the smoke test functions for tasks project
    get: Run the test functions that test tasks.get()
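For reference, a test file that uses these registered markers might look like this (the test functions here are hypothetical):

```python
import pytest

@pytest.mark.smoke
def test_add_smoke():
    # would exercise a small, fast subset of the tasks functionality
    assert 1 + 1 == 2

@pytest.mark.get
def test_get_like():
    # would exercise tasks.get()-related behavior
    assert [10, 20][0] == 10
```

Running pytest -m smoke would then select only the first of these two tests.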
With these markers registered, you can now also see them with pytest --markers with their
descriptions:

$ cd /path/to/code/ch6/b/tasks_proj/tests
$ pytest --markers
@pytest.mark.smoke: Run the smoke test test functions

@pytest.mark.get: Run the test functions that test tasks.get()

@pytest.mark.skip(reason=None): skip the ...

...
If markers aren't registered, they won't show up in the --markers list. With them registered, they
show up in the list, and if you use --strict, any misspelled or unregistered markers show up as an
error. The only difference between ch6/a/tasks_proj and ch6/b/tasks_proj is the contents of the
pytest.ini file. It's empty in ch6/a. Let's try running the tests without registering any markers:

$ cd /path/to/code/ch6/a/tasks_proj/tests
$ pytest --strict --tb=line
===================== test session starts ======================
collected 45 items / 2 errors

============================ ERRORS ============================
______________ ERROR collecting func/test_add.py ______________
func/test_add.py: in <module>
    @pytest.mark.smoke
...
E   AttributeError: 'smoke' not a registered marker
_________ ERROR collecting func/test_api_exceptions.py ________

func/test_api_exceptions.py: in <module>
    @pytest.mark.smoke
...
E   AttributeError: 'smoke' not a registered marker
!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!
=================== 2 error in 0.24 seconds ====================
If you use markers in pytest.ini to register your markers, you may as well add --strict to your
addopts while you're at it. You'll thank me later. Let's go ahead and add a pytest.ini file to the
tasks project:

ch6/b/tasks_proj/tests/pytest.ini
[pytest]
addopts = -rsxX -l --tb=short --strict
markers =
    smoke: Run the smoke test test functions
    get: Run the test functions that test tasks.get()

This has a combination of flags I prefer over the defaults: -rsxX to report which tests skipped,
xfailed, or xpassed, --tb=short for a shorter traceback for failures, and --strict to only allow
declared markers. And then a list of markers to allow for the project.

This should allow us to run tests, including the smoke tests:

$ cd /path/to/code/ch6/b/tasks_proj/tests
$ pytest --strict -m smoke
===================== test session starts ======================
collected 57 items

func/test_add.py .
func/test_api_exceptions.py ..

===================== 54 tests deselected ======================
=========== 3 passed, 54 deselected in 0.06 seconds ============

Requiring a Minimum pytest Version

The minversion setting enables you to specify a minimum pytest version you expect for your
tests. For instance, I like to use approx() when testing floating point numbers for "close enough"
equality in tests. But this feature didn't get introduced into pytest until version 3.0. To avoid
confusion, I add the following to projects that use approx():

[pytest]
minversion = 3.0

This way, if someone tries to run the tests using an older version of pytest, an error message
appears.

Stopping pytest from Looking in the Wrong Places

Did you know that one of the definitions of "recurse" is to swear at your code twice? Well, no.
But, it does mean to traverse subdirectories. In the case of pytest, test discovery traverses many
directories recursively. But there are some directories you just know you don't want pytest
looking in.

The default setting for norecursedirs is '.* build dist CVS _darcs {arch} *.egg'. Having '.*' in
there is a good reason to name your virtual environment '.venv', because all directories starting
with a dot will not be traversed. However, I have a habit of naming it venv, so I could add that to
norecursedirs.

In the case of the Tasks project, you could list src in there also, because having pytest look for
test files there would just be a waste of time.

[pytest]
norecursedirs = .* venv src *.egg dist build

When overriding a setting that already has a useful value, like this setting, it's a good idea to
know what the defaults are and put the ones back you care about, as I did in the previous code
with *.egg dist build.

The norecursedirs setting is kind of a corollary to testpaths, so let's look at that next.

Specifying Test Directory Locations

Whereas norecursedirs tells pytest where not to look, testpaths tells pytest where to look.
testpaths is a list of directories relative to the root directory to look in for tests. It's only used if a
directory, file, or nodeid is not given as an argument.

Suppose for the Tasks project we put pytest.ini in the tasks_proj directory instead of under tests:

tasks_proj/
├── pytest.ini
├── src
│   └── tasks
│       ├── api.py
│       └── ...
└── tests
    ├── conftest.py
    ├── func
    │   ├── __init__.py
    │   ├── test_add.py
    │   ├── ...
    └── unit
        ├── __init__.py
        ├── test_task.py
        └── ...

It could then make sense to put tests in testpaths:

[pytest]
testpaths = tests
Now, as long as you start pytest from the tasks_proj directory, pytest will only look in
tasks_proj/tests. My problem with this is that I often bounce around a test directory during test
development and debugging, so I can easily test a subdirectory or file without typing out the
whole path. Therefore, for me, this setting doesn't help much with interactive testing.

However, it's great for tests launched from a continuous integration server or from tox. In those
cases, you know that the root directory is going to be fixed, and you can list directories relative
to that fixed root. These are also the cases where you really want to squeeze your test times, so
shaving a bit off of test discovery is awesome.

At first glance, it might seem silly to use both testpaths and norecursedirs at the same time.
However, as you've seen, testpaths doesn't help much with interactive testing from different
parts of the file system. In those cases, norecursedirs can help. Also, if you have directories with
tests that don't contain tests, you could use norecursedirs to avoid those. But really, what would
be the point of putting extra directories in tests that don't have tests?

Changing Test Discovery Rules

pytest finds tests to run based on certain test discovery rules. The standard test discovery rules
are:

Start at one or more directories. You can specify filenames or directory names on the
command line. If you don't specify anything, the current directory is used.

Look in the directory and all subdirectories recursively for test modules.

A test module is a file with a name that looks like test_*.py or *_test.py.

Look in test modules for functions that start with test_.

Look for classes that start with Test. Look for methods in those classes that start with test_,
as long as the class doesn't have an __init__ method.

These are the standard discovery rules; however, you can change them.
python_classes

The usual test discovery rule for pytest and classes is to consider a class a potential test class if it
starts with Test*. The class also can't have an __init__() function. But what if we want to name
our test classes <something>Test or <something>Suite? That's where python_classes comes in:

[pytest]
python_classes = *Test Test* *Suite

This enables us to name classes like this:

class DeleteSuite():
    def test_delete_1():
        ...

    def test_delete_2():
        ...

    ....
python_files

Like python_classes, python_files modifies the default test discovery rule, which is to look for
files that start with test_* or end in *_test.

Let's say you have a custom test framework in which you named all of your test files
check_<something>.py. Seems reasonable. Instead of renaming all of your files, just add a line to
pytest.ini like this:

[pytest]
python_files = test_* *_test check_*

Easy peasy. Now you can migrate your naming convention gradually if you want to, or just leave
it as check_*.
python_functions

python_functions acts like the previous two settings, but for test function and method names. The
default is test_*. To add check_*—you guessed it—do this:

[pytest]
python_functions = test_* check_*

Now the pytest naming conventions don't seem that restrictive, do they? If you don't like the
default naming convention, just change it. However, I encourage you to have a better reason.
Migrating hundreds of test files is definitely a good reason.

Disallowing XPASS

Setting xfail_strict = true causes tests marked with @pytest.mark.xfail that don't fail to be
reported as an error. I think this should always be set. For more information on the xfail marker,
go to Marking Tests as Expecting to Fail.
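The same strictness can also be applied per test through the marker's strict keyword, which overrides the ini default in either direction (the test function here is a hypothetical sketch):

```python
import pytest

@pytest.mark.xfail(strict=True, reason="feature not implemented yet")
def test_future_feature():
    # With strict=True, an unexpected pass (XPASS) is reported as a
    # failure regardless of the xfail_strict ini setting.
    assert False  # expected to fail for now
```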

Avoiding Filename Collisions

The utility of having __init__.py files in every test subdirectory of a project confused me for a
long time. However, the difference between having these and not having these is simple. If you
have __init__.py files in all of your test subdirectories, you can have the same test filename show
up in multiple directories. If you don't, you can't. That's it. That's the effect on you.

Here's an example. Directories a and b both have the file test_foo.py. It doesn't matter what these
files have in them, but for this example, they look like this:

ch6/dups/a/test_foo.py
def test_a():
    pass

ch6/dups/b/test_foo.py
def test_b():
    pass

With a directory structure like this:

dups
├── a
│   └── test_foo.py
└── b
    └── test_foo.py
These files don't even have the same content, but it's still mucked up. Running them individually
will be fine, but running pytest from the dups directory won't work:

$ cd /path/to/code/ch6/dups
$ pytest a
================== test session starts ==================
collected 1 items

a/test_foo.py .

=============== 1 passed in 0.01 seconds ================
$ pytest b
================== test session starts ==================
collected 1 items

b/test_foo.py .

=============== 1 passed in 0.01 seconds ================
$ pytest
================== test session starts ==================
collected 1 items / 1 errors

======================== ERRORS =========================
____________ ERROR collecting b/test_foo.py ____________
import file mismatch:
imported module 'test_foo' has this __file__ attribute:
  /path/to/code/ch6/dups/a/test_foo.py
which is not the same as the test file we want to collect:
  /path/to/code/ch6/dups/b/test_foo.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename
for your test file modules
!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!
================ 1 error in 0.15 seconds ================
That error message doesn't really make it clear what went wrong.

To fix this test, just add empty __init__.py files in the subdirectories. Here, the example directory
dups_fixed is the same as dups, but with __init__.py files added:

dups_fixed/
├── a
│   ├── __init__.py
│   └── test_foo.py
└── b
    ├── __init__.py
    └── test_foo.py

Now, let's try this again from the top level in dups_fixed:

$ cd /path/to/code/ch6/dups_fixed
$ pytest
================== test session starts ==================
collected 2 items

a/test_foo.py .
b/test_foo.py .

=============== 2 passed in 0.01 seconds ================
There, all better. You might say to yourself that you'll never have duplicate filenames, so it
doesn't matter. That's fine. But projects grow and test directories grow, and do you really want
to wait until it happens to you before you fix it? I say just put those files in there as a habit and
don't worry about it again.
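For the curious, the underlying mechanism is Python's module cache: without package __init__.py files, both files get imported under the bare module name test_foo, and one name can only map to one file per interpreter session. A rough stand-alone sketch of that behavior (not pytest's actual code):

```python
import sys
import types

# Simulate the first test file being imported as module "test_foo".
mod_a = types.ModuleType("test_foo")
mod_a.__file__ = "a/test_foo.py"
sys.modules["test_foo"] = mod_a  # the first import wins the name

# A later "import test_foo" intended for b/test_foo.py just returns the
# cached module -- the same mismatch pytest's error message reports.
cached = sys.modules["test_foo"]
assert cached.__file__ == "a/test_foo.py"
```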

Exercises

In Chapter 5, Plugins, you created a plugin called pytest-nice that included a --nice command-
line option. Let's extend that to include a pytest.ini option called nice.

1. Add the following line to the pytest_addoption hook function in pytest_nice.py:
   parser.addini('nice', type='bool', help='Turn failures into opportunities.')
2. The places in the plugin that use getoption() will have to also call getini('nice'). Make those changes.
3. Manually test this by adding nice to a pytest.ini file.
4. Don't forget the plugin tests. Add a test to verify that the setting nice from pytest.ini works correctly.
5. Add the tests to the plugin tests directory. You'll need to look up some extra pytester functionality.[19]

What's Next

While pytest is extremely powerful on its own—especially so with plugins—it also integrates
well with other software development and software testing tools. In the next chapter, you'll look
at using pytest in conjunction with other powerful testing tools.

Footnotes

[19]
https://docs.pytest.org/en/latest/_modules/_pytest/pytester.html#Testdir

Copyright © The Pragmatic Bookshelf.

Chapter 7

Using pytest with Other Tools

You don't usually use pytest on its own, but rather in a testing environment with other tools. This
chapter looks at other tools that are often used in combination with pytest for effective and
efficient testing. While this is by no means an exhaustive list, the tools discussed here give you a
taste of the power of mixing pytest with other tools.

pdb: Debugging Test Failures

The pdb module is the Python debugger in the standard library. You use --pdb to have pytest
start a debugging session at the point of failure. Let's look at pdb in action in the context of the
Tasks project.

In Parametrizing Fixtures, we left the Tasks project with a few failures:

$ cd /path/to/code/ch3/c/tasks_proj
$ pytest --tb=no -q
.........................................FF.FFFF
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF.FFF...........
42 failed, 54 passed in 4.74 seconds
Before we look at how pdb can help us debug this test, let's take a look at the pytest options
available to help speed up debugging test failures, which we first looked at in Using Options:

--tb=[auto/long/short/line/native/no]: Controls the traceback style.
-v / --verbose: Displays all the test names, passing or failing.
-l / --showlocals: Displays local variables alongside the stacktrace.
-lf / --last-failed: Runs just the tests that failed last.
-x / --exitfirst: Stops the test session with the first failure.
--pdb: Starts an interactive debugging session at the point of failure.
Installing MongoDB

As mentioned in Chapter 3, pytest Fixtures, running the MongoDB tests requires
installing MongoDB and pymongo. I've been testing with the Community Server
edition found at https://www.mongodb.com/download-center. pymongo is installed
with pip: pip install pymongo. However, this is the last example in the book that
uses MongoDB. To try out the debugger without using MongoDB, you could run
the pytest commands from code/ch2, as this directory also contains a few failing
tests.
We just ran the tests from code/ch3/c to see that some of them were failing. We didn't see the
tracebacks or the test names because --tb=no turns off tracebacks, and we didn't have --verbose
turned on. Let's re-run the failures (at most three of them) with verbose:

$ pytest --tb=no --verbose --lf --maxfail=3
===================== test session starts ======================
run-last-failure: rerun last 42 failures
collected 96 items

tests/func/test_add.py::test_add_returns_valid_id[mongo] FAILED
tests/func/test_add.py::test_added_task_has_id_set[mongo] FAILED
tests/func/test_add_variety.py::test_add_1[mongo] FAILED

!!!!!!!!!!!! Interrupted: stopping after 3 failures !!!!!!!!!!!!
===================== 54 tests deselected ======================
=========== 3 failed, 54 deselected in 3.14 seconds ============
Now we know which tests are failing. Let's look at just one of them by using -x, including the
traceback by not using --tb=no, and showing the local variables with -l:

$ pytest -v --lf -l -x
===================== test session starts ======================
run-last-failure: rerun last 42 failures
collected 96 items

tests/func/test_add.py::test_add_returns_valid_id[mongo] FAILED

=========================== FAILURES ===========================
_______________ test_add_returns_valid_id[mongo] _______________

tasks_db = None

    def test_add_returns_valid_id(tasks_db):
        """tasks.add() should return an integer."""
        # GIVEN an initialized tasks db
        # WHEN a new task is added
        # THEN returned task_id is of type int
        new_task = Task('do something')
        task_id = tasks.add(new_task)
>       assert isinstance(task_id, int)
E       AssertionError: assert False
E        +  where False = isinstance(ObjectId('59783baf8204177f24cb1b68'), int)

new_task   = Task(summary='do something', owner=None, done=False, id=None)
task_id    = ObjectId('59783baf8204177f24cb1b68')
tasks_db   = None

tests/func/test_add.py: AssertionError
!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!
===================== 54 tests deselected ======================
=========== 1 failed, 54 deselected in 2.47 seconds ============
Quite often this is enough to understand the test failure. In this particular case, it's pretty clear
that task_id is not an integer—it's an instance of ObjectId. ObjectId is a type used by MongoDB
for object identifiers within the database. My intention with the tasksdb_pymongo.py layer was
to hide particular details of the MongoDB implementation from the rest of the system. Clearly, in
this case, it didn't work.

However, we want to see how to use pdb with pytest, so let's pretend that it wasn't obvious why
this test failed. We can have pytest start a debugging session and start us right at the point of
failure with --pdb:

$ pytest -v --lf -x --pdb
===================== test session starts ======================
run-last-failure: rerun last 42 failures
collected 96 items

tests/func/test_add.py::test_add_returns_valid_id[mongo] FAILED
>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>

tasks_db = None

    def test_add_returns_valid_id(tasks_db):
        """tasks.add() should return an integer."""
        # GIVEN an initialized tasks db
        # WHEN a new task is added
        # THEN returned task_id is of type int
        new_task = Task('do something')
        task_id = tasks.add(new_task)
>       assert isinstance(task_id, int)
E       AssertionError: assert False
E        +  where False = isinstance(ObjectId('59783bf48204177f2a786893'), int)

tests/func/test_add.py: AssertionError
>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>
> /path/to/code/ch3/c/tasks_proj/tests/func/test_add.py()
> test_add_returns_valid_id()
-> assert isinstance(task_id, int)
(Pdb)
Now that we are at the (Pdb) prompt, we have access to all of the interactive debugging features
of pdb. When looking at failures, I regularly use these commands:

p/print expr: Prints the value of expr.
pp expr: Pretty prints the value of expr.
l/list: Lists the point of failure and five lines of code above and below.
l/list begin,end: Lists specific line numbers.
a/args: Prints the arguments of the current function with their values. (This is helpful when
in a test helper function.)
u/up: Moves up one level in the stack trace.
d/down: Moves down one level in the stack trace.
q/quit: Quits the debugging session.

Other navigation commands like step and next aren't that useful since we are sitting right at an
assert statement. You can also just type variable names and get the values.
You can use p/print expr similar to the -l/--showlocals option to see values within the function:

(Pdb) p new_task
Task(summary='do something', owner=None, done=False, id=None)
(Pdb) p task_id
ObjectId('59783bf48204177f2a786893')
(Pdb)

Now you can quit the debugger and continue on with testing:

(Pdb) q

!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!
===================== 54 tests deselected ======================
========== 1 failed, 54 deselected in 123.40 seconds ===========

If we hadn't used -x, pytest would have opened pdb again at the next failed test. More
information about using the pdb module is available in the Python documentation.[20]

Coverage.py: Determining How Much Code Is Tested

Code coverage is a measurement of what percentage of the code under test is being tested by a
test suite. When you run the tests for the Tasks project, some of the Tasks functionality is
executed with every test, but not all of it. Code coverage tools are great for telling you which
parts of the system are being completely missed by tests.

Coverage.py is the preferred Python coverage tool that measures code coverage. You'll use it to
check the Tasks project code under test with pytest.

Before you use coverage.py, you need to install it. I'm also going to have you install a plugin
called pytest-cov that will allow you to call coverage.py from pytest with some extra pytest
options. Since coverage is one of the dependencies of pytest-cov, it is sufficient to install pytest-
cov, as it will pull in coverage.py:

$ pip install pytest-cov
Collecting pytest-cov
  Using cached pytest_cov-2.5.1-py2.py3-none-any.whl
Collecting coverage>=3.7.1 (from pytest-cov)
  Using cached coverage-4.4.1-cp36-cp36m-macosx_10_10_x86_64.whl
...
Installing collected packages: coverage, pytest-cov
Successfully installed coverage-4.4.1 pytest-cov-2.5.1
Let's run the coverage report on version 2 of Tasks. If you still have the first version of the Tasks
project installed, uninstall it and install version 2:

$ pip uninstall tasks
Uninstalling tasks-0.1.0:
  /path/to/venv/bin/tasks
  /path/to/venv/lib/python3.6/site-packages/tasks.egg-link
Proceed (y/n)? y
  Successfully uninstalled tasks-0.1.0
$ cd /path/to/code/ch7/tasks_proj_v2
$ pip install -e .
Obtaining file:///path/to/code/ch7/tasks_proj_v2
...
Installing collected packages: tasks
  Running setup.py develop for tasks
Successfully installed tasks
$ pip list
...
tasks (0.1.1, /path/to/code/ch7/tasks_proj_v2/src)
...

Now that the next version of Tasks is installed, we can run our baseline coverage report:

$ cd /path/to/code/ch7/tasks_proj_v2
$ pytest --cov=src
===================== test session starts ======================
plugins: mock-1.6.2, cov-2.5.1
collected 62 items

tests/func/test_add.py .
tests/func/test_add_variety.py ............................
............
tests/func/test_api_exceptions.py .
tests/func/test_unique_id.py .
tests/unit/test_cli.py .
tests/unit/test_task.py .

---------- coverage: platform darwin, python 3.6.2-final-0 -----------
Name                           Stmts   Miss  Cover
--------------------------------------------------
src/tasks/__init__.py              …      …     …%
src/tasks/api.py                   …      …     …%
src/tasks/cli.py                   …      …     …%
src/tasks/config.py                …      …     …%
src/tasks/tasksdb_pymongo.py      74     74     0%
src/tasks/tasksdb_tinydb.py       32      4    88%
--------------------------------------------------
TOTAL                            250    126    50%


================== 62 passed in 0.47 seconds ===================
Since the current directory is tasks_proj_v2 and the source code under test is all within src,
adding the option --cov=src generates a coverage report for that specific directory under test
only.

As you can see, some of the files have pretty low, to even 0%, coverage. These are good
reminders: tasksdb_pymongo.py is at 0% because we've turned off testing for MongoDB in this
version. Some of the others are pretty low. The project will definitely have to put tests in place
for all of these areas before it's ready for prime time.

A couple of files I thought would have a higher coverage percentage are api.py and
tasksdb_tinydb.py. Let's look at tasksdb_tinydb.py and see what's missing. I find the best way to
do that is to use the HTML reports.

If you run coverage.py again with --cov-report=html, an HTML report is generated:

$ pytest --cov=src --cov-report=html
===================== test session starts ======================
plugins: mock-1.6.2, cov-2.5.1
collected 62 items

tests/func/test_add.py .
tests/func/test_add_variety.py ............................
............
tests/func/test_api_exceptions.py .
tests/func/test_unique_id.py .
tests/unit/test_cli.py .
tests/unit/test_task.py .

---------- coverage: platform darwin, python 3.6.2-final-0 -----------
Coverage HTML written to dir htmlcov


================== 62 passed in 0.45 seconds ===================

You can then open htmlcov/index.html in a browser, which shows the output in the following
screen:

Clicking on tasksdb_tinydb.py shows a report for the one file. The top of the report shows the
percentage of lines covered, plus how many lines were covered and how many are missing, as
shown in the following screen:

Scrolling down, you can see the missing lines, as shown in the next screen:

Even though this screen isn't the complete page for this file, it's enough to tell us that:

1. We're not testing list_tasks() with owner set.
2. We're not testing update() or delete().
3. We may not be testing unique_id() thoroughly enough.

Great. We can put those on our testing to-do list, along with testing the config system.

While code coverage tools are extremely useful, striving for 100% coverage can be dangerous.
When you see code that isn't tested, it might mean a test is needed. But it also might mean that
there's some functionality of the system that isn't needed and could be removed. Like all
software development tools, code coverage analysis does not replace thinking.

Quite a few more options and features of both coverage.py and pytest-cov are available. More
information can be found in the coverage.py[21] and pytest-cov[22] documentation.

mock: Swapping Out Part of the System

The mock package is used to swap out pieces of the system to isolate bits of our code under test
from the rest of the system. Mock objects are sometimes called test doubles, spies, fakes, or
stubs. Between pytest's own monkeypatch fixture (covered in Using monkeypatch) and mock,
you should have all the test double functionality you need.

Mocks Are Weird

If this is the first time you've encountered test doubles like mocks, stubs, and spies,
it's gonna get real weird real fast. It's fun though, and quite powerful.

The mock package is shipped as part of the Python standard library as unittest.mock as of Python
3.3. In earlier versions, it's available as a separate PyPI-installable package, as a rolling backport.
What that means is that you can use the PyPI version of mock with Python 2.6 through the latest
Python version and get the same functionality as the latest Python mock. However, for use with
pytest, a plugin called pytest-mock has some conveniences that make it my preferred interface to
the mock system.
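As a quick stand-alone refresher on the underlying library before we point it at the Tasks CLI, here's a minimal sketch using stdlib unittest.mock directly (fetch_owner is a made-up function for illustration):

```python
from unittest import mock

def fetch_owner(db):
    # code under test: depends on some database-like object
    return db.get_owner()

def test_fetch_owner_uses_db():
    fake_db = mock.Mock()                       # the test double
    fake_db.get_owner.return_value = "Brian"    # canned answer
    assert fetch_owner(fake_db) == "Brian"
    fake_db.get_owner.assert_called_once_with() # spy on the call
```

pytest-mock wraps this same machinery in a mocker fixture that, among other conveniences, undoes patches automatically at the end of each test.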
For the Tasks project, we'll use mock to help us test the command-line interface. In
Coverage.py: Determining How Much Code Is Tested, you saw that our cli.py file wasn't being
tested at all. We'll start to fix that now. But let's first talk about strategy.

An early decision in the Tasks project was to do most of the functionality testing through api.py.
Therefore, it's a reasonable decision that the command-line testing doesn't have to be complete
functionality testing. We can have a fair amount of confidence that the system will work through
the CLI if we mock the API layer during CLI testing. It's also a convenient decision, allowing us
to look at mocks in this section.

The implementation of the Tasks CLI uses the Click third-party command-line interface package.[23]
There are many alternatives for implementing a CLI, including Python's builtin argparse
module. One of the reasons I chose Click is because it includes a test runner to help us test Click
applications. However, the code in cli.py, although hopefully typical of Click applications, isn't
obvious.
Let's pause and install version 3 of Tasks:

$ cd /path/to/code/
$ pip install -e ch7/tasks_proj_v2
...
Successfully installed tasks

In the rest of this section, you'll develop some tests for the list functionality. Let's see it in
action to understand what we're going to test:

$ tasks list
  ID      owner  done summary
  --      -----  ---- -------
$ tasks add 'do something great'
$ tasks add "repeat" -o Brian
$ tasks add "again and again" --owner Okken
$ tasks list
  ID      owner  done summary
  --      -----  ---- -------
   1             False do something great
   2      Brian  False repeat
   3      Okken  False again and again
$ tasks list -o Brian
  ID      owner  done summary
  --      -----  ---- -------
   2      Brian  False repeat
$ tasks list --owner Brian
  ID      owner  done summary
  --      -----  ---- -------
   2      Brian  False repeat

Looks pretty simple. The tasks list command lists all the tasks with a header. It prints the header
even if the list is empty. It prints just the things from one owner if -o or --owner are used. How
do we test it? Lots of ways are possible, but we're going to use mocks.
Tests that use mocks are necessarily white-box tests, and we have to look into the code to decide
what to mock and where. The main entry point is here:

ch7/tasks_proj_v2/src/tasks/cli.py
if __name__ == '__main__':
    tasks_cli()

That's just a call to tasks_cli():

ch7/tasks_proj_v2/src/tasks/cli.py
@click.group(context_settings={'help_option_names': ['-h', '--help']})
@click.version_option(version='0.1.1')
def tasks_cli():
    """Run the tasks application."""
    pass
Obvious? No. But hold on, it gets better (or worse, depending on RXUSHUVSHFWLYH +HUHVRQe
of the commands—the list command:
ch7/tasks_proj_v2/src/tasks/cli.py
\050171\051

​ @tasks_cli.command(name=​"list"​, help=​"list tasks"​)
​ @click.option(​'-o'​, ​'--owner'​, default=None,
​ help=​'list tasks with this owner'​)
​ ​def​ list_tasks(owner):
​ ​"""​
​ ​ List tasks in db.​

​ ​ If owner given, only list tasks with that owner.​
​ ​ """​
​ formatstr = ​"{: >4} {: >10} {: >5} {}"​
​ ​print​(formatstr.format(​'ID'​, ​'owner'​, ​'done'​, ​'summary'​))
​ ​print​(formatstr.format(​'--'​, ​'-----'​, ​'----'​, ​'-------'​))
​ ​with​ _tasks_db():
​ ​for​ t ​in​ tasks.list_tasks(owner):
​ done = ​'True'​ ​if​ t.done ​else ​ ​'False'​
​ owner = ​''​ ​if​ t.owner ​is​ None ​else ​ t.owner
​ ​print​(formatstr.format(
​ t.id, owner, done, t.summary))
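As an aside, the format string in list_tasks() right-aligns each column to a fixed width. Here's a standalone sketch of how that format string behaves on its own (the sample values are just for illustration):

```python
# The same right-aligning format string used by list_tasks():
# width 4 for id, 10 for owner, 5 for done, free-form summary.
formatstr = "{: >4} {: >10} {: >5} {}"

header = formatstr.format('ID', 'owner', 'done', 'summary')
row = formatstr.format(2, 'Brian', 'False', 'repeat')

print(header)  # "  ID      owner  done summary"
print(row)     # "   2      Brian False repeat"
```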
Once you get used to writing Click code, it's not that bad. I'm not going to explain all of this
here, as developing command-line code isn't the focus of the book; however, even though I'm
pretty sure I have this code right, there's lots of room for human error. That's why a good set of
automated tests to make sure this works correctly is important.
This list_tasks(owner) function depends on a couple of other functions: tasks_db(), which is a
context manager, and tasks.list_tasks(owner), which is the API function. We’re going to use
mock to put fake functions in place for tasks_db() and tasks.list_tasks(). Then we can call the
list_tasks method through the command-line interface and make sure it calls the tasks.list_tasks()
function correctlDQGGHDOVZLWKWKHUHWXUQYDOXHFRUUHFWO.
To stub tasks_db(), let’s look at the real implementation:
ch7/tasks_proj_v2/src/tasks/cli.py
​ @contextmanager
​ ​def​ _tasks_db():
​ config = tasks.config.get_config()
​ tasks.start_tasks_db(config.db_path, config.db_type)
​ ​yield​
​ tasks.stop_tasks_db()
The tasks_db() function is a context manager that retrieves the configuration from
tasks.config.get_config(), another external dependency, and uses the configuration to start a
connection with the database. The yield releases control to the with block of list_tasks(), and
after everything is done, the database connection is stopped.
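If the contextmanager pattern is new to you, here's a minimal standalone sketch of the same shape, with hypothetical names standing in for the tasks functions, showing the order in which the pieces run:

```python
from contextlib import contextmanager

trace = []

@contextmanager
def fake_tasks_db():
    trace.append('start db')   # runs on entry, like tasks.start_tasks_db(...)
    yield                      # control passes to the body of the with block
    trace.append('stop db')    # runs on exit, like tasks.stop_tasks_db()

with fake_tasks_db():
    trace.append('list tasks')

print(trace)  # ['start db', 'list tasks', 'stop db']
```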
For the purpose of just testing the CLI behavior up to the point of calling API functions, we
don’t need a connection to an actual database. Therefore, we can replace the context manager
with a simple stub:
ch7/tasks_proj_v2/tests/unit/test_cli.py
​ @contextmanager
​ ​def​ stub_tasks_db():
​ ​yield​
Because this is the first time we've looked at our test code for test_cli.py, let's look at this with
all of the import statements:
ch7/tasks_proj_v2/tests/unit/test_cli.py
​ ​from​ click.testing ​import​ CliRunner
​ ​from​ contextlib ​import​ contextmanager
​ ​import​ pytest
​ ​from​ tasks.api ​import​ Task
​ ​import​ tasks.cli
​ ​import​ tasks.config


​ @contextmanager
​ ​def​ stub_tasks_db():
​ ​yield​
Those imports are for the tests. The only import needed for the stub is from contextlib import
contextmanager.
We'll use mock to replace the real context manager with our stub. Actually, we'll use mocker,
which is a fixture provided by the pytest-mock plugin. Let's look at an actual test. Here's a test
that calls tasks list:
ch7/tasks_proj_v2/tests/unit/test_cli.py
​ ​def​ test_list_no_args(mocker):
​ mocker.patch.object(tasks.cli, ​'_tasks_db'​, new=stub_tasks_db)
​ mocker.patch.object(tasks.cli.tasks, ​'list_tasks'​, return_value=[])
​ runner = CliRunner()
​ runner.invoke(tasks.cli.tasks_cli, [​'list'​])
​ tasks.cli.tasks.list_tasks.assert_called_once_with(None)
The mocker fixture is provided by pytest-mock as a convenience interface to unittest.mock. The
first line, mocker.patch.object(tasks.cli, '_tasks_db', new=stub_tasks_db), replaces the _tasks_db()
context manager with our stub that does nothing.
The second line, mocker.patch.object(tasks.cli.tasks, 'list_tasks', return_value=[]), replaces any
calls to tasks.list_tasks() from within tasks.cli with a default MagicMock object with a return value
of an empty list. We can use this object later to see if it was called correctly. The MagicMock
class is a flexible subclass of unittest.mock.Mock with reasonable default behavior and the ability to
specify a return value, which is what we are using in this example. The Mock and MagicMock
classes (and others) are used to mimic the interface of other code, with introspection methods
built in to allow you to ask them how they were called.
The third and fourth lines of test_list_no_args() use the Click CliRunner to do the same thing as
calling tasks list on the command line.
The final line uses the mock object to make sure the API call was called correctly. The
assert_called_once_with() method is part of unittest.mock.Mock objects, which are all listed in
the Python documentation.[24]
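To see that introspection in isolation, here's a small standalone sketch of what the patched-in object looks like (fake_list_tasks is a hypothetical name, not part of the Tasks project):

```python
from unittest import mock

# A MagicMock with a canned return value, like the one mocker.patch.object
# puts in place of tasks.list_tasks().
fake_list_tasks = mock.MagicMock(return_value=[])

result = fake_list_tasks(None)   # the code under test calls it

assert result == []                            # canned return value
assert fake_list_tasks.call_count == 1         # introspection: number of calls
fake_list_tasks.assert_called_once_with(None)  # and the arguments used
print(fake_list_tasks.call_args)               # call(None)
```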
Let’s look at an almost identical test function that checks the output:
ch7/tasks_proj_v2/tests/unit/test_cli.py
​ @pytest.fixture()
​ ​def​ no_db(mocker):
​ mocker.patch.object(tasks.cli, ​'_tasks_db'​, new=stub_tasks_db)


​ ​def​ test_list_print_empty(no_db, mocker):
​ mocker.patch.object(tasks.cli.tasks, ​'list_tasks'​, return_value=[])
​ runner = CliRunner()
​ result = runner.invoke(tasks.cli.tasks_cli, [​'list'​])
​ expected_output = ("  ID      owner  done summary\n"
​ "  --      -----  ---- -------\n")
​ ​assert​ result.output == expected_output
This time we put the mock stubbing of tasks_db into a no_db fixture so we can reuse it more
easily in future tests. The mocking of tasks.list_tasks() is the same as before. This time, however,
we are also checking the output of the command-line action through result.output and asserting
equality to expected_output.
This assert could have been put in the first test, test_list_no_args, and we could have eliminated
the need for two tests. However, I have less faith in my ability to get CLI code correct than other
code, so separating the questions of “Is the API getting called correctly?” and “Is the action
printing the right thing?” into two tests seems appropriate.
The rest of the tests for the tasks list functionality don't add any new concepts, but perhaps
looking at several of these makes the code easier to understand:
ch7/tasks_proj_v2/tests/unit/test_cli.py
​ ​def​ test_list_print_many_items(no_db, mocker):
​ many_tasks = (
​ Task(​'write chapter'​, ​'Brian'​, True, 1),
​ Task(​'edit chapter'​, ​'Katie'​, False, 2),
​ Task(​'modify chapter'​, ​'Brian'​, False, 3),
​ Task(​'finalize chapter'​, ​'Katie'​, False, 4),
​ )
​ mocker.patch.object(tasks.cli.tasks, ​'list_tasks'​,
​ return_value=many_tasks)
​ runner = CliRunner()
​ result = runner.invoke(tasks.cli.tasks_cli, [​'list'​])
​ expected_output = ("  ID      owner  done summary\n"
​ "  --      -----  ---- -------\n"
​ "   1      Brian  True write chapter\n"
​ "   2      Katie False edit chapter\n"
​ "   3      Brian False modify chapter\n"
​ "   4      Katie False finalize chapter\n")
​ ​assert​ result.output == expected_output


​ ​def​ test_list_dash_o(no_db, mocker):
​ mocker.patch.object(tasks.cli.tasks, ​'list_tasks'​)
​ runner = CliRunner()
​ runner.invoke(tasks.cli.tasks_cli, [​'list'​, ​'-o'​, ​'brian'​])
​ tasks.cli.tasks.list_tasks.assert_called_once_with(​'brian'​)


​ ​def​ test_list_dash_dash_owner(no_db, mocker):
​ mocker.patch.object(tasks.cli.tasks, ​'list_tasks'​)
​ runner = CliRunner()
​ runner.invoke(tasks.cli.tasks_cli, [​'list'​, ​'--owner'​, ​'okken'​])
​ tasks.cli.tasks.list_tasks.assert_called_once_with(​'okken'​)
Let's make sure they all work:
​ ​$ ​​cd​​ ​​/path/to/code/ch7/tasks_proj_v2​
​ ​$ ​​pytest​​ ​​-v​​ ​​tests/unit/test_cli.py​
​ =================== test session starts ===================
​ plugins: mock-1.6.2, cov-2.5.1
​ collected 5 items

​ tests/unit/test_cli.py::test_list_no_args PASSED
​ tests/unit/test_cli.py::test_list_print_empty PASSED
​ tests/unit/test_cli.py::test_list_print_many_items PASSED
​ tests/unit/test_cli.py::test_list_dash_o PASSED
​ tests/unit/test_cli.py::test_list_dash_dash_owner PASSED

​ ================ 5 passed in 0.06 seconds =================
Yay. They pass.
This was an extremely fast fly-through of using test doubles and mocks. If you want to use
mocks in your testing, I encourage you to read up on unittest.mock in the standard library
documentation,[25] and about pytest-mock at http://pypi.python.org.[26]
tox: Testing Multiple Configurations
tox is a command-line tool that allows you to run your complete suite of tests in multiple
environments. We're going to use it to test the Tasks project in multiple versions of Python.
However, tox is not limited to just Python versions. You can use it to test with different
dependency configurations and different configurations for different operating systems.
In gross generalities, here’s a mental model for how tox works:
tox uses the setup.py file for the package under test to create an installable source distribution of
your package. It looks in tox.ini for a list of environments, and then for each environment:
1. tox creates a virtual environment in a .tox directory.
2. tox pip installs some dependencies.
3. tox pip installs your package from the sdist.
4. tox runs your tests.
After all of the environments are tested, tox reports a summary of how they all did.
This makes a lot more sense when you see it in action, so let's look at how to modify the Tasks
project to use tox to test Python 2.7 and 3.6. I chose versions 2.7 and 3.6 because they are both
already installed on my system. If you have different versions installed, go ahead and change the
envlist line to match whichever versions you have or are willing to install.
The first thing we need to do to the Tasks project is add a tox.ini file at the same level as setup.py
(the top project directory). I'm also going to move anything that's in pytest.ini into tox.ini.
Here's the abbreviated code layout:
​ tasks_proj_v2/
​ ├── ...
​ ├── setup.py
​ ├── tox.ini
​ ├── src
​ │ └── tasks
​ │ ├── __init__.py
​ │ ├── api.py
​ │ └── ...
​ └── tests
​ ├── conftest.py
​ ├── func
​ │ ├── __init__.py
​ │ ├── test_add.py
​ │ └── ...
​ └── unit
​ ├── __init__.py
​ ├── test_task.py
​ └── ...
Now, here’s what the tox.ini file looks like:
ch7/tasks_proj_v2/tox.ini
​ ​# tox.ini , put in same dir as setup.py ​

​ [tox]
​ envlist = ​py27,py36​

​ [testenv]
​ deps=​pytest ​
​ commands=​pytest ​

​ [pytest]
​ addopts = ​-rsxX -l --tb=short --strict ​
​ markers =
​ ​smoke:​ ​Run​ ​the ​ ​smoke​ ​test​ ​test​ ​functions​
​ ​get:​ ​Run​ ​the ​ ​test​ ​functions​ ​that​ ​test​ ​tasks.get()​
Under [tox], we have envlist = py27,py36. This is a shorthand to tell tox to run our tests using
both python2.7 and python3.6.
Under [testenv], the deps=pytest line tells tox to make sure pytest is installed. If you have
multiple test dependencies, you can put them on separate lines. You can also specify which
version to use.
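For instance, if the Tasks project wanted to pin its test dependencies, the [testenv] section might look something like this (the pinned versions here are only examples):

```ini
[testenv]
deps =
    pytest==3.2.1
    pytest-mock==1.6.2
commands = pytest
```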
The commands=pytest line tells tox to run pytest in each environment.
Under [pytest], we can put whatever we normally would want to put into pytest.ini to configure
pytest, as discussed in the ​
Configuration ​ chapter. In this case, addopts is used to turn on extra
summary information for skips, xfails, and xpasses (-rsxX) and turn on showing local variables
in stack traces (-l). It also defaults to shortened stack traces (--tb=short) and makes sure all
markers used in tests are declared first (--strict). The markers section is where the markers are
declared.
Before running tox, you have to make sure you install it:
​ ​$ ​​pip​​ ​​install ​​ ​​tox​
This can be done within a virtual environment.
Then to run tox, just run, well, tox:
​ ​$ ​​cd​​ ​​/path/to/code/ch7/tasks_proj_v2​
​ ​$ ​​tox​
​ GLOB sdist-make: /path/to/code/ch7/tasks_proj_v2/setup.py
​ py27 create: /path/to/code/ch7/tasks_proj_v2/.tox/py27
​ py27 installdeps: pytest
​ py27 inst: /path/to/code/ch7/tasks_proj_v2/.tox/dist/tasks-0.1.1.zip
​ py27 installed: click==6.7,funcsigs==1.0.2,mock==2.0.0,
​ pbr==3.1.1,py==1.4.34,pytest==3.2.1,
​ pytest-mock==1.6.2,six==1.10.0,tasks==0.1.1,tinydb==3.4.0
​ py27 runtests: PYTHONHASHSEED='...'
​ py27 runtests: commands[0] | pytest
​ ================= test session starts ==================
​ plugins: mock-1.6.2
​ collected 62 items

​ tests/func/test_add.py ...
​ tests/func/test_add_variety.py ............................
​ ............
​ tests/func/test_api_exceptions.py .........
​ tests/func/test_unique_id.py .
​ tests/unit/test_cli.py .....
​ tests/unit/test_task.py ....

​ ============== 62 passed in 0.25 seconds ===============
​ py36 create: /path/to/code/ch7/tasks_proj_v2/.tox/py36
​ py36 installdeps: pytest
​ py36 inst: /path/to/code/ch7/tasks_proj_v2/.tox/dist/tasks-0.1.1.zip
​ py36 installed: click==6.7,py==1.4.34,pytest==3.2.1,
​ pytest-mock==1.6.2,six==1.10.0,tasks==0.1.1,tinydb==3.4.0
​ py36 runtests: PYTHONHASHSEED='...'
​ py36 runtests: commands[0] | pytest
​ ================= test session starts ==================
​ plugins: mock-1.6.2
​ collected 62 items

​ tests/func/test_add.py ...
​ tests/func/test_add_variety.py ............................
​ ............
​ tests/func/test_api_exceptions.py .........
​ tests/func/test_unique_id.py .
​ tests/unit/test_cli.py .....
​ tests/unit/test_task.py ....

​ ============== 62 passed in 0.27 seconds ===============
​ _______________________ summary ________________________
​ py27: commands succeeded
​ py36: commands succeeded
​ congratulations :)
At the end, we have a nice summary of all the test environments and their outcomes:
​ _________________________ summary _________________________
​ py27: commands succeeded
​ py36: commands succeeded
​ congratulations :)
Doesn't that give you a nice, warm, happy feeling? We got a “congratulations” and a smiley
face.
tox is much more powerful than what I'm showing here and deserves your attention if you are
using pytest to test packages intended to be run in multiple environments. For more detailed
information, check out the tox documentation.[27]
Jenkins CI: Automating Your Automated Tests
Continuous integration (CI) systems such as Jenkins[28] are frequently used to launch test suites
after each code commit. pytest includes options to generate junit.xml-formatted files required by
Jenkins and other CI systems to display test results.
Jenkins is an open source automation server that is frequently used for continuous integration.
Even though Python doesn't need to be compiled, it's fairly common practice to use Jenkins or
other CI systems to automate the running and reporting of Python projects. In this section, you'll
take a look at how the Tasks project might be set up in Jenkins. I'm not going to walk through
the Jenkins installation. It's different for every operating system, and instructions are available
on the Jenkins website.
When using Jenkins for running pytest suites, there are a few Jenkins plugins that you may find
useful. These have been installed for the example:
build-name-setter: This plugin sets the display name of a build to something other than #1,
#2, #3, and so on.
Test Results Analyzer plugin: This plugin shows the history of test execution results in a
tabular or graphical format.
You can install plugins by going to the top-level Jenkins page, which is localhost:8080/manage
for me as I'm running it locally, then clicking Manage Jenkins->Manage Plugins->Available.
Search for the plugin you want with the filter box. Check the box for the plugin you want. I
usually select “Install without Restart,” and then on the Installing Plugins/Upgrades page, I select
the box that says “Restart Jenkins when installation is complete and no jobs are running.”
We'll look at a complete configuration in case you'd like to follow along, for the Tasks project.
The Jenkins project/item is a “Freestyle Project” named “tasks,” as shown in the following
screen.
The configuration is a little odd since we’re using versions of the Tasks project that look like
tasks_proj, tasks_proj_v2, and so on, instead of version control. Therefore, we need to
parametrize the project to tell each test session where to install the Tasks project and where to
find the tests. We’ll use a couple of string parameters, as shown in the next screen, to specify
those directories. (Click “This project is parametrized” to get these options available.)
Next, scroll down to Build Environment, and select “Delete workspace before build starts” and
Set Build Name. Set the name to ${start_tests_dir} #${BUILD_NUMBER}, as shown in the
next screen.
Next are the Build steps. On Mac or Unix-like systems, select Add build step->Execute shell.
On Windows, select Add build step->Execute Windows batch command. Since I'm on a Mac, I
used an Execute shell block to call a script, as shown here:
The content of the text box is:
​ ​# your paths will be different​
​ code_path=/Users/okken/projects/book/bopytest/Book/code
​ run_tests=${code_path}/ch7/jenkins/run_tests.bash
​ bash -e ${run_tests} ${tasks_proj_dir} ${start_tests_dir} ${WORKSPACE}
We use a script instead of putting all of this code into the execute block in Jenkins so that any
changes can be tracked with revision control. Here’s the script:
ch7/jenkins/run_tests.bash
​ ​#!/bin/bash​

​ ​# your paths will be different ​
​ top_path=/Users/okken/projects/book/bopytest/Book
​ code_path=​${​top_path​}​/code
​ venv_path=​${​top_path​}​/venv
​ tasks_proj_dir=​${ ​code_path​}​/$1
​ start_tests_dir=​${​code_path​}​/$2
​ results_dir=$3

​ ​# click and Python 3,​
​ ​# from http://click.pocoo.org/5/python3/ ​
​ export LC_ALL=en_US.utf-8
​ export LANG=en_US.utf-8

​ ​# virtual environment ​
​ source ​${​venv_path​}​/bin/activate

​ ​# install project ​
​ pip install -e ​${​tasks_proj_dir​}​

​ ​# run tests​
​ cd ​${​start_tests_dir​}​
​ pytest --junit-xml=​${​results_dir​}​/results.xml
The bottom line has pytest --junit-xml=${results_dir}/results.xml. The --junit-xml flag is the
only thing needed to produce the junit.xml-format results file Jenkins needs.
There are other options:
​ ​$ ​​pytest​​ ​​--help​​ ​​|​​ ​​grep​​ ​​junit​
​ --junit-xml=path create junit-xml style report file at given path.
​ --junit-prefix=str prepend prefix to classnames in junit-xml output
​ junit_suite_name (string) Test suite name for JUnit report
The --junit-prefix can be used as a prefix for every test. This is useful when using tox and you
want to separate the different environment results. junit_suite_name is a config file option that
you can set in the [pytest] section of pytest.ini or tox.ini. Later, we'll see that the results will have
“from (pytest)” in them. To change “pytest” to something else, use junit_suite_name.
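For example, a suite name could be set like this in the config file (the name tasks_suite is just an illustration):

```ini
[pytest]
junit_suite_name = tasks_suite
```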
Next, we’ll add a post-build action: Add post-build action->Publish Junit test result report. Fill in
the Test report XMLs with results.xml, as shown in the next screen.
That's it! Now we can run tests through Jenkins. Here are the steps:
1. Click Save.
2. Go to the top project.
3. Click “Build with Parameters.”
4. Select your directories and click Build.
5. When it's done, hover over the title next to the ball in Build History and select Console
Output from the drop-down menu that appears. (Or click the build name and select
Console Output.)
6. Look at the output and try to figure out what went wrong.
You may be able to skip steps 5 and 6, but I never do. I've never set up a Jenkins job and had it
work the first time. There are usually directory permission problems or path issues or typos in
my script, and so on.
Before we look at the results, let's run one more version to make it interesting. Click “Build with
Parameters” again. This time, keep the same project directory, but set another chapter's directory
as the start_tests_dir, and click Build. After a refresh of the project top view, you should see the
following screen:
Click inside the graph or on the “Latest Test Result” link to see an overview of the test session,
with “+” icons to expand for test failures.
Clicking on any of the failing test names shows you the individual test failure information, as
shown in the next screen. This is where you see the “(from pytest)” as part of the test name. This
is what's controlled by the junit_suite_name in a config file.
Going back to Jenkins > tasks, you can click on Test Results Analyzer to see a view that lists
which tests haven’t been run for different sessions, along with the pass/fail status (see the
following screen):
You've seen how to run pytest suites with virtual environments from Jenkins, but there are quite
a few other topics to explore around using pytest and Jenkins together. You can test multiple
environments with Jenkins by either setting up separate Jenkins tasks for each environment, or
by having Jenkins call tox directly. There's also a nice plugin called Cobertura that is able to
display coverage data from coverage.py. Check out the Jenkins documentation[29] for more
information.
unittest: Running Legacy Tests with pytest
unittest is the test framework built into the Python standard library. Its purpose is to test Python
itself, but it is often used for project testing, too. pytest works as a unittest runner, and can run
both pytest and unittest tests in the same session.
Let's pretend that when the Tasks project started, it used unittest instead of pytest for testing.
And perhaps there are a lot of tests already written. Fortunately, you can use pytest to run
unittest-based tests. This might be a reasonable option if you are migrating your testing effort
from unittest to pytest. You can leave all the old tests as unittest, and write new ones in pytest.
You can also gradually migrate older tests as you have time, or as changes are needed. There are
a couple of issues that might trip you up in the migration, however, and I'll address some of
those here. First, let's look at a test written for unittest:
ch7/unittest/test_delete_unittest.py
​ ​import​ unittest
​ ​import​ shutil
​ ​import​ tempfile
​ ​import​ tasks
​ ​from​ tasks ​import​ Task


​ ​def​ setUpModule():
​ ​"""Make temp dir, initialize DB."""​
​ ​global​ temp_dir
​ temp_dir = tempfile.mkdtemp()
​ tasks.start_tasks_db(str(temp_dir), ​'tiny'​)


​ ​def​ tearDownModule():
​ ​"""Clean up DB, remove temp dir."""​
​ tasks.stop_tasks_db()
​ shutil.rmtree(temp_dir)


​ ​class​ TestNonEmpty(unittest.TestCase):

​ ​def​ setUp(self):
​ tasks.delete_all() ​# start empty ​
​ ​# add a few items, saving ids​
​ self.ids = []
​ self.ids.append(tasks.add(Task(​'One'​, ​'Brian'​, True)))
​ self.ids.append(tasks.add(Task(​'Two'​, ​'Still Brian'​, False)))
​ self.ids.append(tasks.add(Task(​'Three'​, ​'Not Brian'​, False)))

​ ​def​ test_delete_decreases_count(self):
​ ​# GIVEN 3 items​
​ self.assertEqual(tasks.count(), 3)
​ ​# WHEN we delete one​
​ tasks.delete(self.ids[0])
​ ​# THEN count decreases by 1​
​ self.assertEqual(tasks.count(), 2)
The actual test is at the bottom, test_delete_decreases_count(). The rest of the code is there for
setup and teardown. This test runs fine in unittest:
​ ​$ ​​cd​​ ​​/path/to/code/ch7/unittest​
​ ​$ ​​python​​ ​​-m​​ ​​unittest​​ ​​-v​​ ​​test_delete_unittest.py​
​ test_delete_decreases_count (test_delete_unittest.TestNonEmpty) ... ok

​ ----------------------------------------------------------------------
​ Ran 1 test in 0.024s

​ OK
It also runs fine in pytest:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_delete_unittest.py​
​ ========================== test session starts ===========================
​ collected 1 items

​ test_delete_unittest.py::TestNonEmpty::test_delete_decreases_count PASSED

​ ======================== 1 passed in 0.02 seconds ========================
This is great if you just want to use pytest as a test runner for unittest. However, our premise is
that the Tasks project is migrating to pytest. Let's say we want to migrate tests one at a time and
run both unittest and pytest versions at the same time until we are confident in the pytest
versions. Let's look at a rewrite for this test and then try running them both:
ch7/unittest/test_delete_pytest.py
​ ​import​ tasks


​ ​def​ test_delete_decreases_count(db_with_3_tasks):
​ ids = [t.id ​for ​ t ​in​ tasks.list_tasks()]
​ ​# GIVEN 3 items​
​ ​assert​ tasks.count() == 3
​ ​# WHEN we delete one​
​ tasks.delete(ids[0])
​ ​# THEN count decreases by 1​
​ ​assert​ tasks.count() == 2
The fixtures we've been using for the Tasks project, including db_with_3_tasks introduced in
​Using Multiple Fixtures​, help set up the database before the test. It's a much smaller file, even
though the test function itself is almost identical.
Both tests pass individually:
​ ​$ ​​pytest​​ ​​-q​​ ​​test_delete_pytest.py​
​ .
​ 1 passed in 0.01 seconds
​ ​$ ​​pytest​​ ​​-q​​ ​​test_delete_unittest.py​
​ .
​ 1 passed in 0.02 seconds
You can even run them together if, and only if, you make sure the unittest version runs first:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_delete_unittest.py​​ ​​test_delete_pytest.py​
​ ========================== test session starts ===========================
​ collected 2 items

​ test_delete_unittest.py::TestNonEmpty::test_delete_decreases_count PASSED
​ test_delete_pytest.py::test_delete_decreases_count[tiny] PASSED

​ ======================== 2 passed in 0.07 seconds ========================
If you run the pytest version first, something goes haywire:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_delete_pytest.py​​ ​​test_delete_unittest.py​
​ ========================== test session starts ===========================
​ collected 2 items

​ test_delete_pytest.py::test_delete_decreases_count[tiny] PASSED
​ test_delete_unittest.py::TestNonEmpty::test_delete_decreases_count PASSED
​ test_delete_unittest.py::TestNonEmpty::test_delete_decreases_count ERROR

​ ================================ ERRORS =================================
​ _____ ERROR at teardown of TestNonEmpty.test_delete_decreases_count _____

​ tmpdir_factory = <_pytest.tmpdir.TempdirFactory object at 0x...>
​ request = <SubRequest 'tasks_db_session' for <Function 'test_delete_decreases_count'>>

​ @pytest.fixture(scope='session', params=['tiny'])
​ def tasks_db_session(tmpdir_factory, request):
​ temp_dir = tmpdir_factory.mktemp('temp')
​ tasks.start_tasks_db(str(temp_dir), request.param)
​ yield ​# this is where the testing happens​
​ > tasks.stop_tasks_db()

​ conftest.py:
​ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

​ def stop_tasks_db(): ​# type: () -> None​
​ global _tasksdb
​ > _tasksdb.stop_tasks_db()
​ E AttributeError: 'NoneType' object has no attribute 'stop_tasks_db'

​ ../tasks_proj_v2/src/tasks/api.py: AttributeError
​ =================== 2 passed, 1 error in 0.13 seconds ====================
You can see that something goes wrong at the end, after both tests have run and passed.
Let's use --setup-show to investigate further:
​ ​$ ​​pytest​​ ​​-q​​ ​​--tb=no​​ ​​--setup-show​​ ​​test_delete_pytest.py​​ ​​test_delete_unittest.py​

​ SETUP S tmpdir_factory
​ SETUP S tasks_db_session (fixtures used: tmpdir_factory)[tiny]
​ SETUP F tasks_db (fixtures used: tasks_db_session)
​ SETUP S tasks_just_a_few
​ SETUP F db_with_3_tasks (fixtures used: tasks_db, tasks_just_a_few)
​ test_delete_pytest.py::test_delete_decreases_count[tiny]
​ (fixtures used: db_with_3_tasks, tasks_db, tasks_db_session,
​ tasks_just_a_few, tmpdir_factory).
​ TEARDOWN F db_with_3_tasks
​ TEARDOWN F tasks_db
​ test_delete_unittest.py::TestNonEmpty::test_delete_decreases_count.
​ TEARDOWN S tasks_just_a_few
​ TEARDOWN S tasks_db_session[tiny]E
​ TEARDOWN S tmpdir_factory
​ 2 passed, 1 error in 0.08 seconds
The session scope teardown fixtures are run after all the tests, including the unittest tests. This
stumped me for a bit until I realized that the tearDownModule() in the unittest module was
shutting down the connection to the database. The tasks_db_session() teardown from pytest was
then trying to do the same thing afterward.
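The clash boils down to stopping the same database connection twice. This minimal sketch uses hypothetical stand-ins for the tasks API (not the real implementation) to show why the second stop raises AttributeError:

```python
class FakeDB:
    def close(self):
        pass

_tasksdb = None

def start_tasks_db():
    global _tasksdb
    _tasksdb = FakeDB()

def stop_tasks_db():
    global _tasksdb
    db, _tasksdb = _tasksdb, None
    db.close()  # AttributeError if the db was already stopped

start_tasks_db()
stop_tasks_db()            # unittest's tearDownModule() stops the db
second_stop_error = None
try:
    stop_tasks_db()        # pytest's session teardown tries to stop it again
except AttributeError as err:
    second_stop_error = err
print(second_stop_error)   # 'NoneType' object has no attribute 'close'
```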
Fix the problem by using the pytest session scope fixture with the unittest tests. This is possible
by adding @pytest.mark.usefixtures() decorators at the class or method level:
ch7/unittest/test_delete_unittest_fix.py
​ ​import​ pytest
​ ​import​ unittest
​ ​import​ tasks
​ ​from​ tasks ​import​ Task


​ @pytest.mark.usefixtures(​'tasks_db_session'​)
​ ​class​ TestNonEmpty(unittest.TestCase):

​ ​def​ setUp(self):
​ tasks.delete_all() ​# start empty​
​ ​# add a few items, saving ids​
​ self.ids = []
​ self.ids.append(tasks.add(Task(​'One'​, ​'Brian'​, True)))
​ self.ids.append(tasks.add(Task(​'Two'​, ​'Still Brian'​, False)))
​ self.ids.append(tasks.add(Task(​'Three'​, ​'Not Brian'​, False)))
​ ​def​ test_delete_decreases_count(self):
​ ​# GIVEN 3 items​
​ self.assertEqual(tasks.count(), 3)
​ ​# WHEN we delete one ​
​ tasks.delete(self.ids[0])
​ ​# THEN count decreases by 1​
​ self.assertEqual(tasks.count(), 2)
Now both unittest and pytest can cooperate and not run into each other:
​ ​$ ​​pytest​​ ​​-v​​ ​​test_delete_pytest.py​​ ​​test_delete_unittest_fix.py​
​ ==================== test session starts =====================
​ plugins: mock-1.6.0, cov-2.5.1
​ collected 2 items

​ test_delete_pytest.py::test_delete_decreases_count PASSED
​ test_delete_unittest_fix.py::TestNonEmpty::test_delete_decreases_count PASSED

​ ================== 2 passed in 0.02 seconds ==================
Note that this is only necessary for session scope resources shared by unittest and pytest. As
discussed earlier in ​
Marking Test Functions ​, you can also use pytest markers on unittest tests,
such as @pytest.mark.skip() and @pytest.mark.xfail(), and user markers like
@pytest.mark.foo().
Going back to the unittest example, we still used setUp() to save the ids of the tasks. Aside from
highlighting that getting a list of ids from tasks is obviously an overlooked API method, it also
points to a slight issue with using pytest.mark.usefixtures with unittest: we can't pass data from a
fixture to a unittest function directly.
However, you can pass it through the cls object that is part of the request object. In the next
example, setUp() code has been moved into a function scope fixture that passes the ids through
request.cls.ids:
ch7/unittest/test_delete_unittest_fix2.py
​ ​import​ pytest
​ ​import​ unittest
​ ​import​ tasks
​ ​from​ tasks ​import​ Task


​ @pytest.fixture()
​ ​def​ tasks_db_non_empty(tasks_db_session, request):
​ tasks.delete_all() ​# start empty ​
​ ​# add a few items, saving ids​
​ ids = []
​ ids.append(tasks.add(Task(​'One'​, ​'Brian'​, True)))
​ ids.append(tasks.add(Task(​'Two'​, ​'Still Brian'​, False)))
​ ids.append(tasks.add(Task(​'Three'​, ​'Not Brian'​, False)))
​ request.cls.ids = ids


​ @pytest.mark.usefixtures(​'tasks_db_non_empty'​)
​ ​class​ TestNonEmpty(unittest.TestCase):

​ ​def​ test_delete_decreases_count(self):
​ ​# GIVEN 3 items​
​ self.assertEqual(tasks.count(), 3)
​ ​# WHEN we delete one ​
​ tasks.delete(self.ids[0])
​ ​# THEN count decreases by 1​
​ self.assertEqual(tasks.count(), 2)
The test accesses the ids list through self.ids, just like before.
The ability to use marks has a limitation: you cannot use parametrized fixtures with unittest-
based tests. However, when looking at the last example with unittest using pytest fixtures, it's
not that far from rewriting it in pytest form. Remove the unittest.TestCase base class and change
the self.assertEqual() calls to straight assert calls, and you'd be there.
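To make that comparison concrete, here's a toy check written both ways (the test bodies are invented purely for illustration):

```python
import unittest

# unittest style: a TestCase subclass with self.assert* methods.
class TestCountUnittest(unittest.TestCase):
    def test_count(self):
        self.assertEqual(1 + 2, 3)

# pytest style: a plain function with a plain assert.
def test_count_pytest():
    assert 1 + 2 == 3

# Running the unittest version programmatically:
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestCountUnittest).run(result)
print(result.wasSuccessful())  # True
test_count_pytest()            # raises AssertionError only on failure
```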
Another limitation with running unittest with pytest is that unittest subtests will stop at the first
failure under pytest, while unittest itself will run each subtest, regardless of failures. When all
subtests pass, pytest runs all of them. Because you won't see any false-positive results because
of this limitation, I consider this a minor difference.
Exercises
1. The test code in ch2 has a few intentionally failing tests. Use --pdb while running these
tests. Try it without the -x option, and the debugger will open multiple times, once for each
failure.
2. Try fixing the code and rerunning tests with --lf --pdb to just run the failed tests and use
the debugger. Trying out debugging tools in a casual environment where you can play
around and not be worried about deadlines and fixes is important.
3. We noticed lots of missing tests during our coverage exploration. One topic missing is to
test tasks.update(). Write some tests of that in the func directory.
4. Run coverage.py. What other tests are missing? If you covered api.py, do you think it
would be fully tested?
5. Add some tests to test_cli.py to check the command-line interface for tasks update using
mock.
6. Run your new tests (along with all the old ones) against at least two Python versions with
tox.
7. Try using Jenkins to graph all the different tasks_proj versions and test permutations in the
chapters.
What’s Next
You are definitelUHDG to go out and trStest with RXURZQSURMHFWV$QGFKHFNRXWWKe
appendixes that follow. If RXYHPDGHLWWKLVIDU,OODVVXPHou no longer need help with pip
or virtual environments. However, RXPD not have looked at Appendix 3, ​
Plugin Sampler
Pack ​. If RXHQMRed this chapter, it deserves RXUWLPHWRDWOHDVWVNLPWKURXJKLW7KHQ,
Appendix 4, ​
Packaging and Distributing Python Projects ​ provides a quick look at how to share
code through various levels of packaging, and Appendix 5, ​
xUnit Fixtures ​ covers an alternative
stOHRIStest fixtures that closer resembles traditional xUnit testing tools.
Also, keep in touch! Check out the book's webpage [30]
and use the discussion forum[31] and
errata [32]
pages to help me keep the book lean, relevant, and easy to follow. This book is intended
to be a living document. I want to keep it up to date and relevant for every wave of new pytest
users.
Footnotes
[20]
https://docs.python.org/3/library/pdb.html
[21]
https://coverage.readthedocs.io
[22]
https://pytest-cov.readthedocs.io
[23]
http://click.pocoo.org
[24]
https://docs.python.org/dev/library/unittest.mock.html
[25]
https://docs.python.org/dev/library/unittest.mock.html
[26]
https://pypi.python.org/pypi/pytest-mock
[27]
https://tox.readthedocs.io
[28]
https://jenkins.io
[29]
https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin
[30]
https://pragprog.com/titles/bopytest
[31]
https://forums.pragprog.com/forums/438
[32]
https://pragprog.com/titles/bopytest/errata
Copyright © 2017 The Pragmatic Bookshelf.
Appendix 1
Virtual Environments
Python virtual environments enable you to set up a Python sandbox with its own set of packages,
separate from the system site-packages, in which to work. There are many reasons to use virtual
environments, such as if you have multiple services running with the same Python installation,
but with different packages and package version requirements. In addition, you might find it
handy to keep the dependent package requirements separate for every Python project you work
on. Virtual environments let you do that.
The P3,YHUVLRQRIYLUWXDOHQYZRUNVLQPRVWHQYLURQPHQWV$VRI3thon 3.3, the venv virtual
environment module is included as part of the standard librar+RZHYHUVRPHSUREOHPVZLWh
venv have been reported on Ubuntu. Since virtualenv works with PWKRQ DQGDVIDUEDFNDs
PWKRQ DQGRQ8EXQWXZHOOXVHYLUWXDOHQYLQWKLVTXLFNRYHUYLHZ.
Here’s how to set up a virtual environment in macOS and Linux:​ ​$ ​​pip​​ ​​install ​​ ​​-U​​ ​​virtualenv​
​ ​$ ​​virtualenv​​ ​​-p​​ ​​/path/to/a/pWKRQH[e ​​ ​​/path/to/env_name​
​ ​$ ​​source ​​ ​​/path/to/env_name/bin/activate ​
​ (env_name) $
​ ​...​​ ​​do​​ ​​RXr ​​ ​​work​​ ​​...​
​ (env_name) $ deactivate
You can also drive the process from PWKRQ: ​ ​$ ​​pWKRQ6​​ ​​-m​​ ​​pip​​ ​​install ​​ ​​-U​​ ​​virtualenv​
​ ​$ ​​pWKRQ6​​ ​​-m​​ ​​virtualenv​​ ​​env_name ​
​ ​$ ​​source ​​ ​​env_name/bin/activate ​
​ (env_name) $
​ ​...​​ ​​do​​ ​​RXr ​​ ​​work​​ ​​...​
​ (env_name) $ deactivate
In Windows, there’s a change to the activate line: ​ ​C:/>​​ ​​pip​​ ​​install ​​ ​​-U​​ ​​virtualenv​
​ ​C:/>​​ ​​virtualenv​​ ​​-p​​ ​​/path/to/a/pWKRQH[e ​​ ​​/path/to/env_name​
​ ​C:/>​​ ​​/path/to/env_name/Scripts/activate.bat​
​ (env_name) C:/>
​ ​...​​ ​​do​​ ​​RXr ​​ ​​work​​ ​​...​
​ (env_name) C:/> deactivate
You can do the same trick of driving everWKLQJIURPWKH3thon executable on Windows as
well.
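On Python 3.3+ you can also stay entirely within the standard library: the venv module builds an environment programmatically as well as from the command line. A minimal sketch (the temp-directory path is only for illustration, and with_pip=False just keeps the example fast):

```python
import tempfile
import venv
from pathlib import Path

# Build a throwaway environment in a temp directory.
target = Path(tempfile.mkdtemp()) / "env_name"
venv.EnvBuilder(with_pip=False).create(target)

# A virtual environment is just a directory: pyvenv.cfg records the base
# interpreter it links back to, and bin/ (Scripts/ on Windows) holds the
# python launcher and the activate scripts.
print(sorted(p.name for p in target.iterdir()))
print((target / "pyvenv.cfg").read_text().splitlines()[0])
```

Deleting that directory removes the environment completely, just as with virtualenv.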

In practice, setting up a virtual environment can be done in fewer steps. For example, I don't
often update virtualenv if I know I've updated it not too long ago. I also usually put the virtual
environment directory, env_name, directly in my project's top directory.
Therefore, the steps are usually just the following:
 $ cd /path/to/my_proj
 $ virtualenv -p $(which python3.6) my_proj_venv
 $ source my_proj_venv/bin/activate
 (my_proj_venv) $
 ... do your work ...
 (my_proj_venv) $ deactivate
I've also seen two additional installation methods that are interesting and could work for you:
1. Put the virtual environment in the project directory (as was done in the previous code), but
name the env directory something consistent, such as venv or .venv. The benefit of this is
that you can put venv or .venv in your global .gitignore file. The downside is that the
environment name hint in the command prompt just tells you that you are using a virtual
environment, but not which one.
2. Put all virtual environments into a common directory, such as ~/venvs. Now the
environment names will be different, letting the command prompt be more useful. You
also don't need to worry about .gitignore, since it's not in your project tree. Finally, this
directory is one place to look if you forget all of the projects you're working on.
Remember, a virtual environment is a directory with links back to the python.exe file and the
pip.exe file of the site-wide Python version it's using. But anything you install is installed in the
virtual environment directory, and not in the global site-packages directory. When you're done
with a virtual environment, you can just delete the directory and it completely disappears.
I've covered the basics and common use case of virtualenv. However, virtualenv is a flexible tool
with many options. Be sure to check out virtualenv --help. It may preemptively answer questions
you may have about your specific situation. Also, the Python Packaging Authority docs on
virtualenv[33] are worth reading if you still have questions.
Footnotes
[33] https://virtualenv.pypa.io

Appendix 2
pip
pip is the tool used to install Python packages, and it is installed as part of your Python
installation. pip supposedly is a recursive acronym that stands for Pip Installs Python or Pip
Installs Packages. (Programmers can be pretty nerdy with their humor.) If you have more than
one version of Python installed on your system, each version has its own pip package manager.
By default, when you run pip install something, pip will:
1. Connect to the PyPI repository at https://pypi.python.org/pypi.
2. Look for a package called something.
3. Download the appropriate version of something for your version of Python and your
system.
4. Install something into the site-packages directory of your Python installation that was used
to call pip.
This is a gross understatement of what pip does; it also does cool stuff like setting up scripts
defined by the package, wheel caching, and more.
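Because each interpreter owns its own pip, one way to be sure which pip you're invoking is to go through the interpreter itself. A small sketch (the exact version string will vary with your installation):

```python
import subprocess
import sys

# Ask the pip tied to *this* interpreter to identify itself.
out = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True, check=True,
).stdout
# Typical shape: "pip 9.0.1 from /path/to/site-packages (python 3.6)"
print(out)
```

This is exactly what typing python -m pip --version does at the shell, which is why that form is the safe choice with multiple Pythons installed.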
As mentioned, each installation of Python has its own version of pip tied to it. If you're using
virtual environments, pip and python are automatically linked to whichever Python version you
specified when creating the virtual environment. If you aren't using virtual environments, and
you have multiple Python versions installed, such as python3.5 and python3.6, you will probably
want to use python3.5 -m pip or python3.6 -m pip instead of pip directly. It works just the same.
(For the examples in this appendix, I assume you are using virtual environments, so that pip
works just fine as-is.)
To check the version of pip and which version of Python it's tied to, use pip --version:
 (my_env) $ pip --version
 pip 9.0.1 from /path/to/code/my_env/lib/python3.6/site-packages (python 3.6)
To list the packages you have currently installed with pip, use pip list. If there's something there
you don't want anymore, you can uninstall it with pip uninstall something.
 (my_env) $ pip list
 pip (9.0.1)
 setuptools (36.2.7)
 wheel (0.29.0)
 (my_env) $ pip install pytest
 ...
 Installing collected packages: py, pytest
 Successfully installed py-1.4.34 pytest-3.2.1
 (my_env) $ pip list
 pip (9.0.1)
 py (1.4.34)
 pytest (3.2.1)
 setuptools (36.2.7)
 wheel (0.29.0)
As shown in this example, pip installs the package you want and also any dependencies that
aren't already installed.
pip is pretty flexible. It can install things from other places, such as GitHub, your own servers, a
shared directory, or a local package you're developing yourself, and it always sticks the packages
in site-packages unless you're using Python virtual environments.
You can use pip to install packages with version numbers from http://pypi.python.org if it's a
release version PyPI knows about:
 $ pip install pytest==3.2.1
You can use pip to install a local package that has a setup.py file in it:
 $ pip install /path/to/package
Use ./package_name if you are in the same directory as the package to install it locally:
 $ cd /path/just/above/package
 $ pip install my_package      # pip is looking in PyPI for "my_package"
 $ pip install ./my_package    # now pip looks locally
You can use pip to install packages that have been downloaded as zip files or wheels without
unpacking them.
You can also use pip to download a lot of files at once using a requirements.txt file:
 (my_env) $ cat requirements.txt
 pytest==3.2.1
 pytest-xdist==1.20.0
 (my_env) $ pip install -r requirements.txt
 ...
 Successfully installed apipkg execnet pytest-3.2.1 pytest-xdist-1.20.0
You can use pip to download a bunch of various versions into a local cache of packages, and
then point pip there instead of PyPI to install them into virtual environments later, even when
offline.
The following downloads pytest and all dependencies:
 (my_env) $ mkdir ~/pipcache
 (my_env) $ pip download -d ~/pipcache pytest

 Collecting pytest
   Using cached pytest-3.2.1-py2.py3-none-any.whl
   Saved /Users/okken/pipcache/pytest-3.2.1-py2.py3-none-any.whl
 Collecting py>=1.4.33 (from pytest)
   Using cached py-1.4.34-py2.py3-none-any.whl
   Saved /Users/okken/pipcache/py-1.4.34-py2.py3-none-any.whl
 Collecting setuptools (from pytest)
   Using cached setuptools-36.2.7-py2.py3-none-any.whl
   Saved /Users/okken/pipcache/setuptools-36.2.7-py2.py3-none-any.whl
 Successfully downloaded pytest py setuptools
Later, even if you're offline, you can install from the cache:
 (my_env) $ pip install --no-index --find-links=~/pipcache pytest
 Collecting pytest
 Collecting py>=1.4.33 (from pytest)
 ...
 Installing collected packages: py, pytest
 Successfully installed py-1.4.34 pytest-3.2.1
This is great for situations like running tox or continuous integration test suites without needing
to grab packages from PyPI. I also use this method to grab a bunch of packages before taking a
trip so that I can code on the plane.
The Python Packaging Authority documentation[34] is a great resource for more information on
pip.
Footnotes
[34] https://pip.pypa.io

Appendix 3
Plugin Sampler Pack
Plugins are the booster rockets that enable you to get even more power out of pytest. So many
useful plugins are available, it's difficult to pick just a handful to showcase. You've already seen
the pytest-cov plugin in Coverage.py: Determining How Much Code Is Tested, and the pytest-
mock plugin in mock: Swapping Out Part of the System. The following plugins give you just a
taste of what else is out there.
All of the plugins featured here are available on PyPI and are installed with pip install <plugin-
name>.

Plugins That Change the Normal Test Run Flow
The following plugins in some way change how pytest runs your tests.
pytest-repeat: Run Tests More Than Once
To run tests more than once per session, use the pytest-repeat plugin.[35] This plugin is useful if
you have an intermittent failure in a test.
Following is a normal test run of tests that start with test_list from ch7/tasks_proj_v2:
 $ cd /path/to/code/ch7/tasks_proj_v2
 $ pip install .
 $ pip install pytest-repeat
 $ pytest -v -k test_list
 ===================== test session starts ======================
 plugins: repeat-0.4.1, mock-1.6.2
 collected 62 items

 tests/func/test_api_exceptions.py::test_list_raises PASSED
 tests/unit/test_cli.py::test_list_no_args PASSED
 tests/unit/test_cli.py::test_list_print_empty PASSED
 tests/unit/test_cli.py::test_list_print_many_items PASSED
 tests/unit/test_cli.py::test_list_dash_o PASSED
 tests/unit/test_cli.py::test_list_dash_dash_owner PASSED

 ===================== 56 tests deselected ======================
 =========== 6 passed, 56 deselected in 0.10 seconds ============
With the pytest-repeat plugin, you can use --count to run everything twice:
 $ pytest --count=2 -v -k test_list
 ===================== test session starts ======================
 plugins: repeat-0.4.1, mock-1.6.2
 collected 124 items

 tests/func/test_api_exceptions.py::test_list_raises[1/2] PASSED
 tests/func/test_api_exceptions.py::test_list_raises[2/2] PASSED
 tests/unit/test_cli.py::test_list_no_args[1/2] PASSED
 tests/unit/test_cli.py::test_list_no_args[2/2] PASSED
 tests/unit/test_cli.py::test_list_print_empty[1/2] PASSED
 tests/unit/test_cli.py::test_list_print_empty[2/2] PASSED
 tests/unit/test_cli.py::test_list_print_many_items[1/2] PASSED
 tests/unit/test_cli.py::test_list_print_many_items[2/2] PASSED
 tests/unit/test_cli.py::test_list_dash_o[1/2] PASSED
 tests/unit/test_cli.py::test_list_dash_o[2/2] PASSED
 tests/unit/test_cli.py::test_list_dash_dash_owner[1/2] PASSED
 tests/unit/test_cli.py::test_list_dash_dash_owner[2/2] PASSED

 ===================== 112 tests deselected =====================
 ========== 12 passed, 112 deselected in 0.16 seconds ===========
You can repeat a subset of the tests or just one, and even choose to run it 1,000 times overnight if
you want to see if you can catch the failure. You can also set it to stop on the first failure.
pytest-xdist: Run Tests in Parallel
Usually all tests run sequentially. And that's just what you want if your tests hit a resource that
can only be accessed by one client at a time. However, if your tests do not need access to a
shared resource, you could speed up test sessions by running multiple tests in parallel. The
pytest-xdist plugin allows you to do that. You can specify multiple processors and run many
tests in parallel. You can even push off tests onto other machines and use more than one
computer.
Here's a test that takes at least a second to run, with parametrization such that it runs ten times:
appendices/xdist/test_parallel.py
 import pytest
 import time


 @pytest.mark.parametrize('x', list(range(10)))
 def test_something(x):
     time.sleep(1)
Notice that it takes over ten seconds to run normally:
 $ pip install pytest-xdist
 $ cd /path/to/code/appendices/xdist
 $ pytest test_parallel.py
 ===================== test session starts ======================
 plugins: xdist-1.20.0, forked-0.2
 collected 10 items

 test_parallel.py ..........

 ================== 10 passed in 10.07 seconds =================

With the pytest-xdist plugin, you can use -n numprocesses to run each test in a subprocess, and
use -n auto to automatically detect the number of CPUs on the system. Here's the same test run
on multiple processors:
 $ pytest -n auto test_parallel.py
 ===================== test session starts ======================
 plugins: xdist-1.20.0, forked-0.2
 gw0 [10] / gw1 [10] / gw2 [10] / gw3 [10]
 scheduling tests via LoadScheduling
 ..........
 ================== 10 passed in 4.27 seconds ===================
It's not a silver bullet to speed up your test times by a factor of the number of processors you
have; there is overhead time. However, many testing scenarios enable you to run tests in
parallel. And when the tests are long, you may as well let them run in parallel to speed up your
test time.
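That trade-off is easy to feel with nothing but the standard library: pushing sleepy tasks through a thread pool shows the wall-clock win, minus some scheduling overhead. A sketch, with small numbers so it finishes quickly:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_task(x):
    time.sleep(0.2)  # stand-in for a one-second test
    return x

# Run four tasks one after another.
start = time.perf_counter()
for i in range(4):
    slow_task(i)
sequential = time.perf_counter() - start

# Run the same four tasks at the same time.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_task, range(4)))
parallel = time.perf_counter() - start

print(results)                 # [0, 1, 2, 3]
print(parallel < sequential)   # True: roughly 0.2s versus roughly 0.8s
```

pytest-xdist uses separate worker processes rather than threads, but the arithmetic is the same: the total drops toward the longest single test plus overhead, not to zero.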
The pytest-xdist plugin does a lot more than we've covered here, including the ability to offload
tests to different computers altogether, so be sure to read more about the pytest-xdist plugin[36]
on PyPI.
pytest-timeout: Put Time Limits on Your Tests
There are no normal timeout periods for tests in pytest. However, if you're working with
resources that may occasionally disappear, such as web services, it's a good idea to put some
time restrictions on your tests.
The pytest-timeout plugin[37] does just that. It allows you to pass a timeout period on the
command line or mark individual tests with timeout periods in seconds. The mark overrides the
command-line timeout so that the test can be either longer or shorter than the timeout limit.
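The "method: signal" you'll see in the session output refers to the plugin's default mechanism on Unix: a SIGALRM timer interrupts the test. The core trick can be sketched with the standard library (Unix-only; the helper and exception names here are my own, not the plugin's):

```python
import signal
import time

class Timeout(Exception):
    pass

def run_with_timeout(func, seconds):
    """Unix-only sketch: interrupt func with SIGALRM after `seconds`."""
    def handler(signum, frame):
        raise Timeout(f"Timeout >{seconds}s")
    old = signal.signal(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return func()
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)   # cancel the timer
        signal.signal(signal.SIGALRM, old)        # restore the old handler

def slow_test():
    time.sleep(1)

try:
    run_with_timeout(slow_test, 0.5)
    outcome = "passed"
except Timeout:
    outcome = "timed out"
print(outcome)  # timed out
```

The plugin also offers a thread-based method for platforms without SIGALRM, at the cost of less precise interruption.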
Let's run the tests from the previous example (with one-second sleeps) with a half-second
timeout:
 $ cd /path/to/code/appendices/xdist
 $ pip install pytest-timeout
 $ pytest --timeout=0.5 -x test_parallel.py
 ===================== test session starts ======================
 plugins: xdist-1.20.0, timeout-1.2.0, forked-0.2
 timeout: 0.5s method: signal
 collected 10 items

 test_parallel.py F

 =========================== FAILURES ===========================
 ______________________ test_something[0] _______________________

 x = 0

     @pytest.mark.parametrize('x', list(range(10)))
     def test_something(x):
 >       time.sleep(1)
 E       Failed: Timeout >0.5s

 test_parallel.py: Failed
 !!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!
 =================== 1 failed in 0.68 seconds ===================
The -x stops testing after the first failure.

Plugins That Alter or Enhance Output
These plugins don't change how tests are run, but they do change the output you see.
pytest-instafail: See Details of Failures and Errors as They Happen
Usually pytest displays the status of each test, and then after all the tests are finished, pytest
displays the tracebacks of the failed or errored tests. If your test suite is relatively fast, that might
be just fine. But if your test suite takes quite a bit of time, you may want to see the tracebacks as
they happen, rather than wait until the end. This is the functionality of the pytest-instafail
plugin.[38] When tests are run with the --instafail flag, the failures and errors appear right away.
Here's a test with normal failures at the end:
 $ cd /path/to/code/appendices/xdist
 $ pytest --timeout=0.5 --tb=line --maxfail=2 test_parallel.py
 =================== test session starts ===================
 plugins: xdist-1.20.0, timeout-1.2.0, forked-0.2
 timeout: 0.5s method: signal
 collected 10 items

 test_parallel.py FF

 ======================== FAILURES =========================
 /path/to/code/appendices/xdist/test_parallel.py: Failed: Timeout >0.5s
 /path/to/code/appendices/xdist/test_parallel.py: Failed: Timeout >0.5s
 !!!!!!!!! Interrupted: stopping after 2 failures !!!!!!!!!!
 ================ 2 failed in 1.20 seconds =================
Here's the same test with --instafail:
 $ pytest --instafail --timeout=0.5 --tb=line --maxfail=2 test_parallel.py
 =================== test session starts ===================
 plugins: xdist-1.20.0, timeout-1.2.0, instafail-0.3.0, forked-0.2
 timeout: 0.5s method: signal
 collected 10 items

 test_parallel.py F

 /path/to/code/appendices/xdist/test_parallel.py: Failed: Timeout >0.5s

 test_parallel.py F

 /path/to/code/appendices/xdist/test_parallel.py: Failed: Timeout >0.5s

 !!!!!!!!! Interrupted: stopping after 2 failures !!!!!!!!!!
 ================ 2 failed in 1.19 seconds =================
The --instafail functionality is especially useful for long-running test suites when someone is
monitoring the test output. You can read the test failures, including the stack trace, without
stopping the test suite.
pytest-sugar: Instafail + Colors + Progress Bar
The pytest-sugar plugin[39] lets you see status not just as characters, but also in color. It also
shows failure and error tracebacks during execution, and has a cool progress bar to the right of
the shell.
A test run without sugar is shown first, followed by the same run with sugar.
The checkmarks (or x's for failures) show up as the tests finish. The progress bars grow in real
time, too. It's quite satisfying to watch.
pytest-emoji: Add Some Fun to Your Tests
The pytest-emoji plugin[40] allows you to replace all of the test status characters with emojis.
You can also change the emojis if you don't like the ones picked by the plugin author. Although
this project is perhaps an example of silliness, it's included in this list because it's a small plugin
and is a good example on which to base your own plugins.
To demonstrate the emoji plugin in action, following is sample code that produces pass, fail,
skip, xfail, xpass, and error. Here it is with normal output and tracebacks turned off:
Here it is with verbose, -v:
Now, here is the sample code with --emoji:
And then with both -v and --emoji:
It's a pretty fun plugin, but don't dismiss it as silly out of hand; it allows you to change the emoji
using hook functions. It's one of the few pytest plugins that demonstrates how to add hook
functions to plugin code.

pytest-html: Generate HTML Reports for Test Sessions
The pytest-html plugin[41] is quite useful in conjunction with continuous integration, or in
systems with large, long-running test suites. It creates a web page to view the test results for a
pytest session. The HTML report created includes the ability to filter for type of test result:
passed, skipped, failed, errors, expected failures, and unexpected passes. You can also sort by
test name, duration, or status. And you can include extra metadata in the report, including
screenshots or data sets. If you have reporting needs greater than pass vs. fail, be sure to try out
pytest-html.
The pytest-html plugin is really easy to start. Just add --html=report_name.html:
 $ cd /path/to/code/appendices/outcomes
 $ pytest --html=report.html
 ====================== test session starts ======================
 metadata: ...
 collected 6 items

 test_outcomes.py F.xXsE

 generated html file: /path/to/code/appendices/outcomes/report.html
 ============================ ERRORS =============================
 _________________ ERROR at setup of test_error __________________

     @pytest.fixture()
     def flaky_fixture():
 >       assert 1 == 2
 E       assert 1 == 2

 test_outcomes.py: AssertionError
 =========================== FAILURES ============================
 ___________________________ test_fail ___________________________

     def test_fail():
 >       assert 1 == 2
 E       assert 1 == 2

 test_outcomes.py: AssertionError
 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.08 seconds
 $ open report.html
This produces a report that includes the information about the test session and a results and
summary page.

The following screen shows the session environment information and summary:
The next screen shows the summary and results:
The report includes JavaScript that allows you to filter and sort, and you can add extra
information to the report, including images. If you need to produce reports for test results, this
plugin is worth checking out.

Plugins for Static Analysis
Static analysis tools run checks against your code without running it. The Python community has
developed some of these tools. The following plugins allow you to run a static analysis tool
against both your code under test and the tests themselves in the same session. Static analysis
failures show up as test failures.
pytest-pycodestyle, pytest-pep8: Comply with Python's Style Guide
PEP 8 is a style guide for Python code.[42] It is enforced for standard library code and is used by
many, if not most, Python developers, open source or otherwise. The pycodestyle[43]
command-line tool can be used to check Python source code to see if it complies with PEP 8.
Use the pytest-pycodestyle plugin[44] to run pycodestyle on code in your project, including test
code, with the --pep8 flag. The pycodestyle tool used to be called pep8,[45] and pytest-pep8[46]
is available if you want to run the legacy tool.
pytest-flake8: Check for Style Plus Linting
While pep8 checks for style, flake8 is a full linter that also checks for PEP 8 style. The flake8
package[47] is a collection of different style and static analysis tools all rolled into one. It
includes lots of options, but has reasonable default behavior. With the pytest-flake8 plugin,[48]
you can run all of your source code and test code through flake8 and get a failure if something
isn't right. It checks for PEP 8, as well as for logic errors. Use the --flake8 option to run flake8
during a pytest session. You can extend flake8 with plugins that offer even more checks, such as
flake8-docstrings,[49] which adds pydocstyle checks for PEP 257, Python's docstring
conventions.[50]

Plugins for Web Development
Web-based projects have their own testing hoops to jump through. Even pytest doesn't make
testing web applications trivial. However, quite a few pytest plugins help make it easier.
pytest-selenium: Test with a Web Browser
Selenium is a project that is used to automate control of a web browser. The pytest-selenium
plugin[51] is the Python binding for it. With it, you can launch a web browser and use it to open
URLs, exercise web applications, and fill out forms. You can also programmatically control the
browser to test a web site or web application.
pytest-django: Test Django Applications
Django is a popular Python-based web development framework. It comes with testing hooks that
allow you to test different parts of a Django application without having to use browser-based
testing. By default, the built-in testing support in Django is based on unittest. The pytest-django
plugin[52] allows you to use pytest instead of unittest to gain all the benefits of pytest. The
plugin also includes helper functions and fixtures to speed up test implementation.
pytest-flask: Test Flask Applications
Flask is another popular framework that is sometimes referred to as a microframework. The
pytest-flask plugin[53] provides a handful of fixtures to assist in testing Flask applications.
Footnotes
[35] https://pypi.python.org/pypi/pytest-repeat
[36] https://pypi.python.org/pypi/pytest-xdist
[37] https://pypi.python.org/pypi/pytest-timeout
[38] https://pypi.python.org/pypi/pytest-instafail
[39] https://pypi.python.org/pypi/pytest-sugar
[40] https://pypi.python.org/pypi/pytest-emoji
[41] https://pypi.python.org/pypi/pytest-html
[42] https://www.python.org/dev/peps/pep-0008
[43] https://pypi.python.org/pypi/pycodestyle
[44] https://pypi.python.org/pypi/pytest-pycodestyle
[45] https://pypi.python.org/pypi/pep8
[46] https://pypi.python.org/pypi/pytest-pep8
[47] https://pypi.python.org/pypi/flake8
[48] https://pypi.python.org/pypi/pytest-flake8
[49] https://pypi.python.org/pypi/flake8-docstrings
[50] https://www.python.org/dev/peps/pep-0257
[51] https://pypi.python.org/pypi/pytest-selenium
[52] https://pypi.python.org/pypi/pytest-django
[53] https://pypi.python.org/pypi/pytest-flask

Appendix 4
Packaging and Distributing Python Projects
The idea of packaging and distribution seems so serious. Most of Python has a rather informal
feeling about it, and now suddenly we're talking packaging and distribution. However, sharing
code is part of working with Python. Therefore, it's important to learn to share code properly
with the built-in Python tools. And while the topic is bigger than what I cover here, it needn't be
intimidating. All I'm talking about is how to share code in a way that is more traceable and
consistent than emailing zipped directories of modules.
This appendix is intended to give you a comfortable understanding of how to set up a project so
that it is installable with pip, how to create a source distribution, and how to create a wheel. This
is enough for you to be able to share your code locally with a small team. To share it further
through PyPI, I'll refer you to some other resources. Let's see how it's done.

Creating an Installable Module
We'll start by learning how to make a small project installable with pip. For a simple one-module
project, the minimal configuration is small. I don't recommend you make it quite this small, but I
want to show a minimal structure in order to build up to something more maintainable, and also
to show how simple setup.py can be. Here's a simple directory structure:
 some_module_proj/
 ├── setup.py
 └── some_module.py
The code we want to share is in some_module.py:
appendices/packaging/some_module_proj/some_module.py
 def some_func():
     return 42
To make it installable with pip, we need a setup.py file. This is about as bare-bones as you can
get:
appendices/packaging/some_module_proj/setup.py
 from setuptools import setup

 setup(
     name='some_module',
     py_modules=['some_module']
 )
One directory with one module and a setup.py file is enough to make it installable via pip:
 $ cd /path/to/code/appendices/packaging
 $ pip install ./some_module_proj
 Processing ./some_module_proj
 Installing collected packages: some-module
   Running setup.py install for some-module ... done
 Successfully installed some-module-0.0.0
And we can now use some_module from Python (or from a test):
 $ python
 Python 3.6.2 (...)
 [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> from some_module import some_func
 >>> some_func()
 42
 >>> exit()
That's a minimal setup, but it's not realistic. If you're sharing code, odds are you are sharing a
package. The next section builds on this to write a setup.py file for a package.

Creating an Installable Package
Let's make this code a package by adding an __init__.py and putting the __init__.py file and
module in a directory with a package name:
 $ tree some_package_proj/
 some_package_proj/
 ├── setup.py
 └── src
     └── some_package
         ├── __init__.py
         └── some_module.py
The content of some_module.py doesn't change. The __init__.py needs to be written to expose
the module functionality to the outside world through the package namespace. There are lots of
choices for this. I recommend skimming the two sections of the Python documentation[54] that
cover this topic.
If we do something like this in __init__.py:
 import some_package.some_module
the client code will have to specify some_module:
 import some_package
 some_package.some_module.some_func()
However, I'm thinking that some_module.py is really our API for the package, and we want
everything in it to be exposed to the package level. Therefore, we'll use this form:
appendices/packaging/some_package_proj/src/some_package/__init__.py
 from some_package.some_module import *
Now the client code can do this instead:
 import some_package
 some_package.some_func()
We also have to change the setup.py file, but not much:
appendices/packaging/some_package_proj/setup.py
 from setuptools import setup, find_packages

 setup(
     name='some_package',
     packages=find_packages(where='src'),
     package_dir={'': 'src'},
 )
Instead of using py_modules, we specify packages.
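To see what that find_packages call discovers, you can run it by hand against a scratch tree. A sketch (requires setuptools; the directory names mirror the example layout):

```python
import os
import tempfile
from setuptools import find_packages

# Recreate the example layout: src/some_package/{__init__.py, some_module.py}
root = tempfile.mkdtemp()
pkg = os.path.join(root, "src", "some_package")
os.makedirs(pkg)
for name in ("__init__.py", "some_module.py"):
    open(os.path.join(pkg, name), "w").close()

# find_packages walks `where` looking for directories that contain __init__.py.
print(find_packages(where=os.path.join(root, "src")))  # ['some_package']
```

The package_dir={'': 'src'} entry then tells setuptools that those discovered package names live under src rather than at the project root.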
This is now installable:
 $ cd /path/to/code/appendices/packaging
 $ pip install ./some_package_proj/
 Processing ./some_package_proj
 Installing collected packages: some-package
   Running setup.py install for some-package ... done
 Successfully installed some-package-0.0.0
and usable:
 $ python
 Python 3.6.2 (...)
 [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> from some_package import some_func
 >>> some_func()
 42
Our project is now installable and in a structure that's easy to build on. You can add a tests
directory at the same level as src to add our tests if you want. However, the setup.py file is still
missing some metadata needed to create a proper source distribution or wheel. It's just a little bit
more work to make that possible.

Creating a Source Distribution and Wheel
For personal use, the configuration shown in the previous section is enough to create a source
distribution and a wheel. Let's try it:
 $ cd /path/to/code/appendices/packaging/some_package_proj/
 $ python setup.py sdist bdist_wheel
 running sdist
 ...
 warning: sdist: standard file not found:
   should have one of README, README.rst, README.txt

 running check
 warning: check: missing required meta-data: url

 warning: check: missing meta-data:
   either (author and author_email)
   or (maintainer and maintainer_email) must be supplied

 running bdist_wheel
 ...
 $ ls dist
 some_package-0.0.0-py3-none-any.whl some_package-0.0.0.tar.gz
Well, with some warnings, a .whl and a .tar.gz file are created. Let's get rid of those warnings.
To do that, we need to:
Add one of these files: README, README.rst, or README.txt.
Add metadata for url.
Add metadata for either (author and author_email) or (maintainer and maintainer_email).
Let's also add:
A version number
A license
A change log
It makes sense that you'd want these things. Including some kind of README allows people to
know how to use the package. The url, author, and author_email (or maintainer) information
makes sense to let users know who to contact if they have issues or questions about the package.
A license is important to let people know how they can distribute, contribute, and reuse the
package. And if it's not open source, say so in the license data. To choose a license for open
source projects, I recommend looking at https://choosealicense.com.

Those extra bits don't add too much work. Here's what I've come up with for a minimal default.
The setup.py:
appendices/packaging/some_package_proj_v2/setup.py
 from setuptools import setup, find_packages

 setup(
     name='some_package',
     description='Demonstrate packaging and distribution',

     version='1.0',
     author='Brian Okken',
     author_email='brian@pythontesting.net',
     url='https://pragprog.com/book/bopytest/python-testing-with-pytest',

     packages=find_packages(where='src'),
     package_dir={'': 'src'},
 )
You should put the terms of the licensing in a LICENSE file. All of the code in this book follows
the following license:
appendices/packaging/some_package_proj_v2/LICENSE
 Copyright (c) 2017 The Pragmatic Programmers, LLC

 All rights reserved.

 Copyrights apply to this source code.

 You may use the source code in your own projects, however the source code
 may not be used to create commercial training material, courses, books,
 articles, and the like. We make no guarantees that this source code is fit
 for any purpose.
Here's the README.rst:
appendices/packaging/some_package_proj_v2/README.rst
 ====================================================
 some_package: Demonstrate packaging and distribution
 ====================================================

 ``some_package`` is the Python package to demonstrate how easy it is
 to create installable, maintainable, shareable packages and distributions.

 It does contain one function, called ``some_func()``.

 .. code-block::

    >>> import some_package
    >>> some_package.some_func()
    42

 That's it, really.
The README.rst is formatted in reStructuredText.[55] I've done what many have done before
me: I copied a README.rst from an open source project, removed everything I didn't like, and
changed everything else to reflect this project.
You can also use an ASCII-formatted README.txt or README, but I'm okay with
copy/paste/edit in this instance.
I recommend also adding a change log. Here’s the start of one:
appendices/packaging/some_package_proj_v2/CHANGELOG.rst
​ Changelog
​ =========

​ ------------------------------------------------------

​ 1.0
​ ---

​ Changes:
​ ~~~~~~~~

​ - Initial version.
See http://keepachangelog.com for some great advice on what to put in your changelog. All of the changes to tasks_proj over the course of this book have been logged into a CHANGELOG.rst file.
Let’s see if this was enough to remove the warnings:
​ ​$ ​​cd​​ ​​/path/to/code/appendices/packaging/some_package_proj_v2​
​ $ python setup.py sdist bdist_wheel
​ running sdist
​ running build
​ running build_py
​ creating build
​ creating build/lib
​ creating build/lib/some_package
​ copying src/some_package/__init__.py -> build/lib/some_package
​ copying src/some_package/some_module.py -> build/lib/some_package
​ installing to build/bdist.macosx-10.6-intel/wheel
​ running install
​ running install_lib
​ creating build/bdist.macosx-10.6-intel
​ creating build/bdist.macosx-10.6-intel/wheel
​ creating build/bdist.macosx-10.6-intel/wheel/some_package
​ copying build/lib/some_package/__init__.py -> build/bdist.macosx-10.6-intel/wheel/some_package
​ copying build/lib/some_package/some_module.py -> build/bdist.macosx-10.6-intel/wheel/some_package
​ running install_egg_info
​ Copying src/some_package.egg-info to
​ build/bdist.macosx-10.6-intel/wheel/some_package-1.0-py3.6.egg-info
​ running install_scripts
​ creating build/bdist.macosx-10.6-intel/wheel/some_package-1.0.dist-info/WHEEL

​ ​$ ​​ls​​ ​​dist​
​ some_package-1.0-py3-none-any.whl some_package-1.0.tar.gz
Yep. No warnings.
Now, we can put the .whl and/or .tar.gz files in a local shared directory and pip install to our heart's content:

​ $ cd /path/to/code/appendices/packaging/some_package_proj_v2
​ ​$ ​​mkdir ​​ ​​~/packages/ ​
​ $ cp dist/some_package-1.0-py3-none-any.whl ~/packages
​ ​$ ​​cp​​ ​​dist/some_package-1.0.tar.gz ​​ ​​~/packages​
​ ​$ ​​pip​​ ​​install ​​ ​​--no-index​​ ​​--find-links=~/packages​​ ​​some_package ​
​ Collecting some_package
​ Installing collected packages: some-package
​ Successfully installed some-package-1.0
​ ​$ ​​pip​​ ​​install ​​ ​​--no-index​​ ​​--find-links=./dist​​ ​​some_package==1.0​
​ Requirement already satisfied: some_package==1.0 in
​ /path/to/venv/lib/python3.6/site-packages

​ ​$​
Now you can create your own stash of local project packages from your team, including multiple versions of each, and install them almost as easily as packages from PyPI.
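The wheel filename being copied around here is not arbitrary: per the wheel specification (PEP 427) it encodes distribution-version-pythontag-abitag-platformtag, and pip uses those tags to decide whether a wheel fits the current interpreter and platform. A minimal sketch that pulls the fields apart (parse_wheel_name is a hypothetical helper for illustration, and it assumes no optional build tag is present):

```python
def parse_wheel_name(filename):
    """Split a wheel filename into its PEP 427 fields.
    Assumes no optional build tag, e.g. some_package-1.0-py3-none-any.whl."""
    stem = filename[:-len('.whl')]
    distribution, version, python_tag, abi_tag, platform_tag = stem.split('-')
    return {
        'distribution': distribution,
        'version': version,
        'python_tag': python_tag,      # 'py3' -> any Python 3 interpreter
        'abi_tag': abi_tag,            # 'none' -> no compiled extension ABI
        'platform_tag': platform_tag,  # 'any' -> pure Python, any OS
    }

print(parse_wheel_name('some_package-1.0-py3-none-any.whl'))
```

A tag set of py3-none-any means pure Python, no compiled ABI, any platform, which is why the same .whl file installs from your local stash on any machine.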

Creating a PyPI-Installable Package
You need to add more metadata to your setup.py to get a package ready to distribute on PyPI. You also need to use a tool such as Twine[56] to push packages to PyPI. Twine is a collection of utilities to help make interacting with PyPI easy and secure. It handles authentication over HTTPS to keep your PyPI credentials secure, and handles the uploading of packages to PyPI.
This is now beyond the scope of this book. However, for information about how to start contributing through PyPI, take a look at the Python Packaging User Guide [57]
and the
PyPI [58]
section of the Python documentation.
Footnotes
[54]
https://docs.python.org/3/tutorial/modules.html#packages
[55]
http://docutils.sourceforge.net/rst.html
[56]
https://pypi.python.org/pypi/twine
[57]
https://python-packaging-user-guide.readthedocs.io
[58]
https://docs.python.org/3/distutils/packageindex.html
Copyright © 2017, The Pragmatic Bookshelf.

Appendix 5
xUnit Fixtures
In addition to the fixture model described in Chapter 3, ​pytest Fixtures​, pytest also supports
xUnit-style fixtures, which are similar to jUnit for Java, cppUnit for C++, and so on.
Generally, xUnit frameworks use a flow of control that looks something like this:
​ setup()
​ test_function()
​ teardown()
This is repeated for every test that will run. pytest fixtures can do anything you need this type of configuration for and more, but if you really want to have setup() and teardown() functions, pytest allows that, too, with some limitations.

Syntax of xUnit Fixtures
xUnit fixtures include setup()/teardown() functions for module, function, class, and method
scope:
setup_module()/teardown_module()
These run at the beginning and end of a module of tests. They run once each. The module parameter is optional.
setup_function()/teardown_function()
These run before and after top-level test functions that are not methods of a test class. They run multiple times, once for every test function. The function parameter is optional.
setup_class()/teardown_class()
These run before and after a class of tests. They run only once. The class parameter is optional.
setup_method()/teardown_method()
These run before and after test methods that are part of a test class. They run multiple times, once for every test method. The method parameter is optional.
Here is an example of all the xUnit fixtures along with a few test functions:
appendices/xunit/test_xUnit_fixtures.py
​ ​def​ setup_module(module):
​ ​print​(f​'​​\n​​setup_module() for {module.__name__}'​)


​ ​def​ teardown_module(module):
​ ​print​(f​'teardown_module() for {module.__name__}'​)


​ ​def​ setup_function(function):
​ ​print​(f​'setup_function() for {function.__name__}'​)


​ ​def​ teardown_function(function):
​ ​print​(f​'teardown_function() for {function.__name__}'​)


​ ​def​ test_1():
​ ​print​(​'test_1()'​)



​ ​def​ test_2():
​ ​print​(​'test_2()'​)


​ ​class​ TestClass:
​ @classmethod
​ ​def​ setup_class(cls):
​ ​print​(f​'setup_class() for class {cls.__name__}'​)

​ @classmethod
​ ​def​ teardown_class(cls):
​ ​print​(f​'teardown_class() for {cls.__name__}'​)

​ ​def​ setup_method(self, method):
​ ​print​(f​'setup_method() for {method.__name__}'​)

​ ​def​ teardown_method(self, method):
​ ​print​(f​'teardown_method() for {method.__name__}'​)

​ ​def​ test_3(self):
​ ​print​(​'test_3()'​)

​ ​def​ test_4(self):
​ ​print​(​'test_4()'​)
I used the parameters to the fixture functions to get the name of the
module/function/class/method to pass to the print statement. You don’t have to use the parameter
names module, function, cls, and method, but that’s the convention.
Here’s the test session to help visualize the control flow:

​ $ cd /path/to/code/appendices/xunit
​ $ pytest -s test_xUnit_fixtures.py
​ ============ test session starts =============
​ plugins: mock-1.6.0, cov-2.5.1
​ collected 4 items

​ test_xUnit_fixtures.py
​ setup_module() for test_xUnit_fixtures
​ setup_function() for test_1
​ test_1()

​ .teardown_function() for test_1
​ setup_function() for test_2
​ test_2()
​ .teardown_function() for test_2
​ setup_class() for class TestClass
​ setup_method() for test_3
​ test_3()
​ .teardown_method() for test_3
​ setup_method() for test_4
​ test_4()
​ .teardown_method() for test_4
​ teardown_class() for TestClass
​ teardown_module() for test_xUnit_fixtures

​ ========== 4 passed in 0.01 seconds ==========

Mixing pytest Fixtures and xUnit Fixtures
You can mix pytest fixtures and xUnit fixtures:
appendices/xunit/test_mixed_fixtures.py
​ ​import​ pytest


​ ​def​ setup_module():
​ ​print​(​'​​\n​​setup_module() - xUnit'​)


​ ​def​ teardown_module():
​ ​print​(​'teardown_module() - xUnit'​)


​ ​def​ setup_function():
​ ​print​(​'setup_function() - xUnit'​)


​ ​def​ teardown_function():
​ ​print​(​'teardown_function() - xUnit​​\n​​'​)

​ @pytest.fixture(scope=​'module'​)
​ ​def​ module_fixture():
​ ​print​(​'module_fixture() setup - pytest'​)
​ ​yield​
​ ​print​(​'module_fixture() teardown - pytest'​)


​ @pytest.fixture(scope=​'function'​)
​ ​def​ function_fixture():
​ ​print​(​'function_fixture() setup - pytest'​)
​ ​yield​
​ ​print​(​'function_fixture() teardown - pytest'​)


​ ​def​ test_1(module_fixture, function_fixture):
​ ​print​(​'test_1()'​)


​ ​def​ test_2(module_fixture, function_fixture):
​ ​print​(​'test_2()'​)
You can do it. But please don’t. It gets confusing. Take a look at this:

​ $ cd /path/to/code/appendices/xunit
​ $ pytest -s test_mixed_fixtures.py
​ ============ test session starts =============
​ plugins: mock-1.6.0, cov-2.5.1
​ collected 2 items

​ test_mixed_fixtures.py
​ setup_module() - xUnit
​ setup_function() - xUnit
​ module_fixture() setup - pytest
​ function_fixture() setup - pytest
​ test_1()
​ .function_fixture() teardown - pytest
​ teardown_function() - xUnit

​ setup_function() - xUnit
​ function_fixture() setup - pytest
​ test_2()
​ .function_fixture() teardown - pytest
​ teardown_function() - xUnit

​ module_fixture() teardown - pytest
​ teardown_module() - xUnit


​ ========== 2 passed in 0.01 seconds ==========
In this example, I’ve also shown that the module, function, and method parameters to the xUnit
fixture functions are optional. I left them out of the function definition, and it still runs fine.

Limitations of xUnit Fixtures
Following are a few of the limitations of xUnit fixtures:
xUnit fixtures don’t show up in --setup-show and --setup-plan. This might seem like a small thing, but when you start writing a bunch of fixtures and debugging them, you’ll love these flags.
There are no session scope xUnit fixtures. The largest scope is module.
Picking and choosing which fixtures a test needs is more difficult with xUnit fixtures. If a
test is in a class that has fixtures defined, the test will use them, even if it doesn’t need to.
Nesting is at most three levels: module, class, and method.
The only way to optimize fixture usage is to create modules and classes with common fixture requirements for all the tests in them.
No parametrization is supported at the fixture level. You can still use parametrized tests,
but xUnit fixtures cannot be parametrized.
There are enough limitations of xUnit fixtures that I strongly encourage you to forget you even saw this appendix and stick with normal pytest fixtures.
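For comparison, the pytest-native way to get the setup/teardown pairing is a single fixture function with a yield in the middle, like module_fixture and function_fixture earlier in this appendix. The split-at-yield pattern is plain Python generators; in this sketch contextlib.contextmanager stands in for @pytest.fixture so it runs without pytest, and resource_fixture is a hypothetical name:

```python
from contextlib import contextmanager

log = []

@contextmanager
def resource_fixture():
    log.append('setup')     # everything before yield is the setup phase
    yield 'a resource'      # the yielded value is what the test receives
    log.append('teardown')  # everything after yield is the teardown phase

# pytest would do this wiring for us; here we drive it by hand.
with resource_fixture() as value:
    log.append(f'test body got {value!r}')

print(log)
# -> ['setup', "test body got 'a resource'", 'teardown']
```

One function holds both halves of the lifecycle, which is why pytest fixtures compose and scope more flexibly than paired setup()/teardown() functions.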
Copyright © 2017, The Pragmatic Bookshelf.

You May Be Interested In
A Common-Sense Guide to Data Structures and Algorithms
If you last saw algorithms in a university course or at a job interview, you’re missing out on what they can do for your code. Learn different sorting and searching techniques, and when to use each. Find out how to use recursion effectively. Discover structures for specialized applications, such as trees and graphs. Use Big O notation to decide which algorithms are best for your production environment. Beginners will learn how to use these techniques from the start, and experienced developers will rediscover approaches they may have forgotten.
Jay Wengrow
(218 pages) ISBN: 9781680502442 $45.95
Design It!
Don’t engineer by coincidence; design it like you mean it! Grounded by fundamentals and filled with practical design methods, this is the perfect introduction to software architecture for programmers who are ready to grow their design skills. Ask the right stakeholders the right questions, explore design options, share your design decisions, and facilitate collaborative workshops that are fast, effective, and fun. Become a better programmer, leader, and designer. Use your new skills to lead your team in implementing software with the right capabilities, and develop awesome software!
Michael Keeling
(350 pages) ISBN: 9781680502091 $42.50
Data Science Essentials in Python
Go from messy, unstructured artifacts stored in SQL and NoSQL databases to a neat, well-organized dataset with this quick reference for the busy data scientist. Understand text mining, machine learning, and network analysis; process numeric data with the NumPy and Pandas modules; describe and analyze data using statistical and network-theoretical methods; and see actual examples of data analysis at work. This one-stop solution covers the essential data science you need in Python.
Dmitry Zinoviev
(224 pages) ISBN: 9781680501841 $29
Practical Programming (2nd edition)
This book is for anyone who wants to understand computer programming. You’ll learn to program in a language that’s used in millions of smartphones, tablets, and PCs. You’ll code along with the book, writing programs to solve real-world problems as you learn the fundamentals of programming using Python. You’ll learn about design, algorithms, testing, and debugging, and come away with all the tools you need to produce quality code. In this second edition, we’ve updated almost all the material, incorporating the lessons we’ve learned over the past five years of teaching Python to people new to programming.

Paul Gries, Jennifer Campbell, Jason Montojo
(400 pages) ISBN: 9781937785451 $38
Explore It!
Uncover surprises, risks, and potentially serious bugs with exploratory testing. Rather than designing all tests in advance, explorers design and execute small, rapid experiments, using what they learned from the last little experiment to inform the next. Learn essential skills of a master explorer, including how to analyze software to discover key points of vulnerability, how to design experiments on the fly, how to hone your observation skills, and how to focus your efforts.
Elisabeth Hendrickson
(186 pages) ISBN: 9781937785024 $29
The Way of the Web Tester
This book is for everyone who needs to test the web. As a tester, you’ll automate your tests. As a developer, you’ll build more robust solutions. And as a team, you’ll gain a vocabulary and a means to coordinate how to write and organize automated tests for the web. Follow the testing pyramid and level up your skills in user interface testing, integration testing, and unit testing. Your new skills will free you up to do other, more important things while letting the computer do the one thing it’s really good at: quickly running thousands of repetitive tasks.
Jonathan Rasmusson

(256 pages) ISBN: 9781680501834 $29
Your Code as a Crime Scene
Jack the Ripper and legacy codebases have more in common than you’d think. Inspired by forensic psychology methods, this book teaches you strategies to predict the future of your codebase, assess refactoring direction, and understand how your team influences the design. With its unique blend of forensic psychology and code analysis, this book arms you with the strategies you need, no matter what programming language you use.
Adam Tornhill
(218 pages) ISBN: 9781680500387 $36
The Nature of Software Development
You need to get value from your software project. You need it “free, now, and perfect.” We can’t get you there, but we can help you get to “cheaper, sooner, and better.” This book leads you from the desire for value down to the specific activities that help good Agile projects deliver better software sooner, and at a lower cost. Using simple sketches and a few words, the author invites you to follow his path of learning and understanding from a half century of software development and from his engagement with Agile methods from their very beginning.
Ron Jeffries
(176 pages) ISBN: 9781941222379 $24

Exercises for Programmers
When you write software, you need to be at the top of your game. Great programmers practice to keep their skills sharp. Get sharp and stay sharp with more than fifty practice exercises rooted in real-world scenarios. If you’re a new programmer, these challenges will help you learn what you need to break into the field, and if you’re a seasoned pro, you can use these exercises to learn that hot new language for your next gig.
Brian P. Hogan
(118 pages) ISBN: 9781680501223 $24
Creating Great Teams
People are happiest and most productive if they can choose what they work on and who they work with. Self-selecting teams give people that choice. Build well-designed and efficient teams to get the most out of your organization, with step-by-step instructions on how to set up teams quickly and efficiently. You’ll create a process that works for you, whether you need to form teams from scratch, improve the design of existing teams, or are on the verge of a big team re-shuffle.
Sandy Mamoli and David Mole
(102 pages) ISBN: 9781680501285 $17
Mazes for Programmers

A book on mazes? Seriously? Yes! Not because you spend your day creating mazes, or because you particularly like solving mazes. But because it’s fun. Remember when programming used to be fun? This book takes you back to those days when you were starting to program, and you wanted to make your code do things, draw things, and solve puzzles. It’s fun because it lets you explore and grow your code, and reminds you how it feels to just think. Sometimes it feels like you live your life in a maze of twisty little passages, all alike. Now you can code your way out.
Jamis Buck
(286 pages) ISBN: 9781680500554 $38
Good Math
Mathematics is beautiful, and it can be fun and exciting as well as practical. Good Math is your guide to some of the most intriguing topics from two thousand years of mathematics: from Egyptian fractions to Turing machines; from the real meaning of numbers to proof trees, group symmetry, and mechanical computation. If you’ve ever wondered what lay beyond the proofs you struggled to complete in high school geometry, or what limits the capabilities of the computer on your desk, this is the book for you.
Mark C. Chu-Carroll
(282 pages) ISBN: 9781937785338 $34
