mirror of https://github.com/python/cpython.git
Documentation for pyclbr and tokenize modules.
parent 3d199af40d
commit 6b103f1e12

@@ -0,0 +1,58 @@
\section{\module{pyclbr} ---
         Python class browser information}

\declaremodule{standard}{pyclbr}
\modulesynopsis{Supports information extraction for a Python class
                browser.}
\sectionauthor{Fred L. Drake, Jr.}{fdrake@acm.org}


The \module{pyclbr} module can be used to determine some limited
information about the classes and methods defined in a module.  The
information provided is sufficient to implement a traditional
three-pane class browser.  The information is extracted from the
source code rather than from an imported module, so this module is
safe to use with untrusted source code.


\begin{funcdesc}{readmodule}{module\optional{, path}}
  % The 'inpackage' parameter appears to be for internal use only....
  Read a module and return a dictionary mapping class names to class
  descriptor objects.  The parameter \var{module} should be the name
  of a module as a string; it may be the name of a module within a
  package.  The \var{path} parameter should be a sequence, and is
  used to augment the value of \code{sys.path}, which is used to
  locate module source code.
\end{funcdesc}
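
As an informal example of how the result might be used (the module
name \code{'spam'} below is only a placeholder for a module assumed to
be available on the search path), the dictionary returned by
\function{readmodule()} can simply be iterated over:

\begin{verbatim}
import pyclbr

# Parse the source of the module without importing it; the keys of
# the result are the names of the classes found in the module.
classes = pyclbr.readmodule('spam')
for name in classes.keys():
    print name
\end{verbatim}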

\subsection{Class Descriptor Objects \label{pyclbr-class-objects}}

The class descriptor objects used as values in the dictionary returned
by \function{readmodule()} provide the following data members:


\begin{memberdesc}[class descriptor]{name}
  The name of the class.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{super}
  A list of class descriptors which describe the immediate base
  classes of the class being described.  Classes which are named as
  superclasses but which are not discoverable by
  \function{readmodule()} are listed as strings giving the class
  names instead of as class descriptors.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{methods}
  A dictionary mapping method names to line numbers.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{file}
  Name of the file containing the class statement defining the class.
\end{memberdesc}

\begin{memberdesc}[class descriptor]{lineno}
  The line number of the class statement within the file named by
  \member{file}.
\end{memberdesc}
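
The following sketch (again using a placeholder module name) shows how
these data members might be combined to produce a simple textual
listing of a module's classes and methods:

\begin{verbatim}
import pyclbr

classes = pyclbr.readmodule('spam')
for name, descriptor in classes.items():
    # file and lineno locate the class statement itself; methods maps
    # each method name to the line on which it is defined.
    print '%s (%s, line %d)' % (descriptor.name, descriptor.file,
                                descriptor.lineno)
    for method, lineno in descriptor.methods.items():
        print '    %s (line %d)' % (method, lineno)
\end{verbatim}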

@@ -0,0 +1,44 @@
\section{\module{tokenize} ---
         Tokenizer for Python source}

\declaremodule{standard}{tokenize}
\modulesynopsis{Lexical scanner for Python source code.}
\moduleauthor{Ka Ping Yee}{}
\sectionauthor{Fred L. Drake, Jr.}{fdrake@acm.org}


The \module{tokenize} module provides a lexical scanner for Python
source code, implemented in Python.  The scanner in this module
returns comments as tokens as well, making it useful for implementing
``pretty-printers,'' including colorizers for on-screen displays.

The scanner is exposed via a single function:


\begin{funcdesc}{tokenize}{readline\optional{, tokeneater}}
  The \function{tokenize()} function accepts two parameters: one
  representing the input stream, and one providing an output
  mechanism for \function{tokenize()}.

  The first parameter, \var{readline}, must be a callable object
  which provides the same interface as the \method{readline()} method
  of built-in file objects (see section~\ref{bltin-file-objects}).
  Each call to the function should return one line of input as a
  string.

  The second parameter, \var{tokeneater}, must also be a callable
  object.  It is called with five parameters: the token type, the
  token string, a tuple \code{(\var{srow}, \var{scol})} specifying
  the row and column where the token begins in the source, a tuple
  \code{(\var{erow}, \var{ecol})} giving the ending position of the
  token, and the line on which the token was found.  The line passed
  is the \emph{logical} line; continuation lines are included.
\end{funcdesc}
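
The following is an informal sketch of a \var{tokeneater} function;
the file name \file{example.py} is purely illustrative, and
\code{tok_name} is the mapping from token types to token names
provided by the \refmodule{token} module:

\begin{verbatim}
import tokenize

def printtoken(type, token, start, end, line):
    # Report each token's symbolic type, its text, and where it
    # begins in the source.
    srow, scol = start
    print tokenize.tok_name[type], repr(token), srow, scol

f = open('example.py')
tokenize.tokenize(f.readline, printtoken)
f.close()
\end{verbatim}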

All constants from the \refmodule{token} module are also exported from
\module{tokenize}, as is one additional token type value that might be
passed to the \var{tokeneater} function by \function{tokenize()}:

\begin{datadesc}{COMMENT}
  Token value used to indicate a comment.
\end{datadesc}
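
For instance, a \var{tokeneater} function that only cares about
comments might look something like the following sketch, which simply
collects the text of each comment together with the row on which it
starts:

\begin{verbatim}
import tokenize

comments = []

def eatcomments(type, token, start, end, line):
    # Remember COMMENT tokens only, tagged with their starting row.
    if type == tokenize.COMMENT:
        comments.append((start[0], token))
\end{verbatim}

Passing this function as the \var{tokeneater} argument of
\function{tokenize()} fills the \code{comments} list as the input is
scanned.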