initial benchmark framework to visualise regression

Casper da Costa-Luis 2016-10-30 16:15:30 +00:00
parent 6b73b5bd29
commit d85e447ce1
5 changed files with 83 additions and 0 deletions

.gitignore

@@ -39,3 +39,7 @@ nosetests.xml
# PyCharm
.idea

# asv
.asv/
benchmarks/*.pyc

CONTRIBUTING.md
@@ -249,6 +249,32 @@ cannot re-upload another with the same version number
updating just the metadata is possible: `[python setup.py] make pypimeta`

UPDATING GH-PAGES
-----------------

The most important file is README.rst, which should always be kept up-to-date
and in sync with the in-line source documentation. This will affect all of the
following:

- The [main repository site](https://github.com/tqdm/tqdm), which automatically
  serves the latest README.rst as well as links to all of GitHub's features.
  This is the preferred online referral link for tqdm.
- The [PyPI mirror](https://pypi.python.org/pypi/tqdm), which automatically
  serves the latest release built from README.rst as well as links to past
  releases.
- Many external web crawlers.

Additionally (less maintained), there exist:

- A [wiki](https://github.com/tqdm/tqdm/wiki), which is publicly editable.
- The [gh-pages project](https://tqdm.github.io/tqdm/), which is built from the
  [gh-pages branch](https://github.com/tqdm/tqdm/tree/gh-pages), in turn built
  using [asv](https://github.com/spacetelescope/asv/).
- The [gh-pages root](https://tqdm.github.io/), which is built from a separate,
  outdated [github.io repo](https://github.com/tqdm/tqdm.github.io).

QUICK DEV SUMMARY
-----------------

asv.conf.json (new file)

@@ -0,0 +1,28 @@
{
    "version": 1,
    "project": "tqdm",
    "project_url": "https://github.com/tqdm/tqdm/",
    "repo": ".",
    "environment_type": "virtualenv",
    "show_commit_url": "http://github.com/tqdm/tqdm/commit/",
    // "pythons": ["2.7", "3.3"],
    // "matrix": {
    //     "numpy": ["1.6", "1.7"],
    //     "six": ["", null],  // test with and without six installed
    //     "pip+emcee": [""],  // emcee is only available for install with pip.
    // },
    // "exclude": [
    //     {"python": "3.2", "sys_platform": "win32"},  // skip py3.2 on windows
    //     {"environment_type": "conda", "six": null},  // don't run without six on conda
    // ],
    // "include": [
    //     // additional env for python2.7
    //     {"python": "2.7", "numpy": "1.8"},
    //     // additional env if run on windows+conda
    //     {"platform": "win32", "environment_type": "conda", "python": "2.7", "libpython": ""},
    // ],
    "benchmark_dir": "benchmarks",
    "env_dir": ".asv/env",
    "results_dir": ".asv/results",
    "html_dir": ".asv/html",
}
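Note that this file is not strict JSON: asv's config format tolerates the `//` comments and the trailing comma after `"html_dir"`, so a plain `json.load()` would choke on it. A minimal sketch of a tolerant loader (illustrative only, not asv's own parser; `load_asv_conf` is a made-up name):

import json
import re

def load_asv_conf(path="asv.conf.json"):
    """Illustrative sketch, NOT asv's parser: load asv.conf.json despite
    its // comments and trailing commas."""
    with open(path) as f:
        text = f.read()
    # drop full-line // comments (all comments in this file are full-line)
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    # drop trailing commas before a closing brace/bracket
    text = re.sub(r",(\s*[}\]])", r"\1", text)
    return json.loads(text)

conf = load_asv_conf()
print(conf["benchmark_dir"], conf["results_dir"])  # -> benchmarks .asv/results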

benchmarks/__init__.py (new, empty file)

benchmarks/benchmarks.py (new file)

@@ -0,0 +1,25 @@
# Write the benchmarking functions here.
# See "Writing benchmarks" in the asv docs for more information.


class TimeSuite:
    """
    An example benchmark that times iterating over a large range
    wrapped in tqdm.
    """
    def setup(self):
        from tqdm import tqdm
        self.tqdm = tqdm
        try:
            self.iterable = xrange(int(6e6))  # Python 2
        except NameError:  # no xrange on Python 3
            self.iterable = range(int(6e6))

    def time_tqdm(self):
        # consume the wrapped iterable; asv times this method per run
        [0 for _ in self.tqdm(self.iterable)]


class MemSuite:
    # def mem_list(self):
    #     return [0] * 256
    pass
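asv discovers benchmarks by method-name prefix: `time_*` methods are timed, `mem_*` methods report the size of the object they return, and `setup()` runs before each benchmark. As a hedged illustration (not part of this commit; the class and method names below are made up), a bare-loop baseline would let the published graphs contrast tqdm's per-iteration overhead against plain Python:

class TimeBaselineSuite:
    """Illustrative sketch only, not part of this commit: a baseline suite
    so the asv graphs can show tqdm's overhead relative to an unwrapped loop."""
    def setup(self):
        self.iterable = range(int(6e6))

    def time_bare_loop(self):
        # same comprehension as TimeSuite.time_tqdm, minus the tqdm wrapper
        [0 for _ in self.iterable]


class MemListSuite:
    def mem_list(self):
        # asv measures the memory footprint of the returned object
        return [0] * 256

With the benchmarks in place, `asv run` executes them against the configured environments, `asv publish` renders the HTML graphs into the `html_dir` above (`.asv/html`), and `asv preview` serves them locally; this is the output that feeds the gh-pages project mentioned in CONTRIBUTING.md.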