
mxODBC - Python ODBC Interface



Subpackages
Adabas - IBM DB2 - DBMaker - Informix - MySQL - PostgreSQL - Oracle - Solid - SybaseASE - SybaseASA - Windows ODBC Manager - Unix iODBC Driver Manager - Unix unixODBC Driver Manager - EasySoft ODBC Bridge - Other DBs

Version 2.0.7

Introduction

    The eGenix™ mxODBC™ package provides a Python Database API 2.0 compliant interface to databases that are accessible via the ODBC API. This can either be done through an ODBC manager, e.g. the one that comes with Windows, or iODBC which is a free ODBC manager for Unix now maintained by OpenLink, or directly by linking to the database's ODBC driver.

    Since ODBC is a widely supported standard for accessing databases, it should in general be possible to adapt the package to any ODBC 2.0 compliant database driver / manager. All you have to do is change the include directives to point to the specific files for your database and maybe set some macro switches to disable ODBC APIs that your driver does not provide. See the installation section for details.

    The package supports multiple database interfacing meaning that you can access e.g. two different databases from within one process. Included are several preconfigured subpackages for a wide range of common databases.

    Note: Unlike most of the other database modules available for Python, this package uses the new date & time types provided by another package I wrote, mxDateTime, eliminating the problems you normally face when handling dates before 1.1.1970 and after 2038. This also makes the module Year 2000 safe.

    Commercial Users:
    Users in commercial environments may use this package for an evaluation period of 30 days starting with the day of initial installation. After this period, commercial users must either buy mxODBC CPU licenses from eGenix.com for continued use (more details) or stop using and uninstall the package.

Interface

    The package tries to adhere to the Python DB API Version 1.0 in most details. Many features of the new DB API 2.0 are already supported too. The package will eventually move to the 2.0 DB API version.

    Here is a list of deviations from the specifications:

    • cursor.description doesn't return display_size and internal_size; both values are always None

    • cursor.executemany() (API 2.0) is currently only supported for lists of tuples. Handling of arbitrary sequences of sequences will be available in a future release of mxODBC.

    • db.callproc() is not yet implemented; you can call stored procedures through standard SQL using cursor.execute(), though.

      The ODBC standard defines the following syntax for calling stored procedures:

      {call procedure-name [([parameter][,[parameter]]...)]}

    • cursor.nextset() (API 2.0) is currently not supported. Since this is only relevant when using stored procedures, it will be implemented when .callproc() is.

    • cursor.setinputsizes() and cursor.setoutputsize() are dummy functions.

    • The type objects / constructors (formerly found in the dbi module defined by DB API 1.0) are only needed if you want to write database independent code (well, at least it helps working in that direction, since you can still use database specific SQL code). If you always stick to ODBC, you might as well use the ODBC datatypes directly.

    • The connection constructor is available under three different names: ODBC() (DB API 1.0), connect() (DB API 2.0) and Connect() (mxODBC specific). See the next section for details on the parameters used. mxODBC also defines a DriverConnect() constructor which is available for ODBC managers and some ODBC drivers.

    • mxODBC uses the DB API 2.0 exception layout. The exception values are either a single string or a tuple having the format (sqlstate, sqltype, errortext, lineno) where lineno refers to the line number in the mxODBC.c file (to ease debugging the package).

    • mxODBC uses the DB API 2.0 conventions for .fetchXXX() return values.
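    Since .callproc() is not available, stored procedures have to be invoked through cursor.execute() using the ODBC call escape shown above. The sketch below only builds such a statement string; the procedure name and parameter count are hypothetical examples, not part of mxODBC:

```python
# Build an ODBC call-escape statement for use with cursor.execute().
# The procedure name "update_prices" is a made-up example.
def call_statement(procname, n_params):
    """Return '{call procname(?,?,...)}' with n_params placeholders."""
    placeholders = ",".join("?" * n_params)
    return "{call %s(%s)}" % (procname, placeholders)

stmt = call_statement("update_prices", 2)
print(stmt)  # {call update_prices(?,?)}
# Typical usage (requires an open mxODBC cursor):
#   cursor.execute(stmt, (category, factor))
```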

    The ODBC API is very rich in terms of accessing information about what is stored in the database and mxODBC makes many of these APIs available as additional connection and cursor methods. These are all extensions to the DB API, of course, so porting your program to other database interfaces will be more difficult than otherwise -- on the other hand there are ODBC drivers for almost all relational databases available.

    Since most ODBC function names and parameters map directly to Python method names (by dropping the SQL prefix and converting to lower-case), and because of copyright issues, I have not copied the ODBC documentation to this page.

    You can download the MS ODBC reference from e.g. Solid Tech -- Solid uses ODBC as native API for their database, BTW.

    Connection Objects

    You connect to a database via a connection object. All communication from and to the database is done using these objects. They are also the scope of transactions you perform. Each connection can be set up to your specific needs, and multiple connections may be open at the same time.

    mxODBC uses a dedicated object type for connections. Each subpackage defines its own object type, but all share the same name: ConnectionType.

    Connection Object Constructors

    Connect(dsn, user='', password=''[, clear_auto_commit=1])
    This constructor returns a connection object for the given datasource. It accepts keyword arguments. dsn indicates the data source to be used, user and password are optional and used for database login.

    Normally, auto-commit is turned off by the constructor if the database supports transactions. If given, the value clear_auto_commit overrides the default behaviour. Passing a false value disables the clearing of the flag and lets the connection use the database's default COMMIT behaviour.

    Note that a compile time switch (DONT_CLEAR_AUTOCOMMIT) allows altering the default value for clear_auto_commit.

    Use the connection method db.setconnectoption(SQL.AUTOCOMMIT, SQL.AUTOCOMMIT_ON|OFF|DEFAULT) (see below) to adjust the connection's behaviour to your needs.

    With auto-commit turned on, transactions are effectively disabled. The rollback() method will raise a NotSupportedError when used on such a connection.

    If you get an exception during connect telling you that the driver is not capable or does not support transactions, e.g. mxODBC.NotSupportedError: ('S1C00', 84, '[Microsoft][ODBC Excel Driver]Driver not capable ', 4226), please try to connect with clear_auto_commit set to 0.

    ODBC(dsn, user='', password=''[, clear_auto_commit=1])
    Is just an alias for Connect() needed for DB API 1.0 compliance.

    connect(dsn, user='', password=''[, clear_auto_commit=1])
    Is just an alias for Connect() needed for DB API 2.0 compliance.

    DriverConnect(DSN_string[, clear_auto_commit=1])
    This constructor returns a connection object for the given datasource which is managed by an ODBC Driver Manager (e.g. the Windows ODBC Manager or iODBC). It allows passing more information to the database than the standard Connect() constructor.

    Please refer to the ODBC manuals of your ODBC manager and database for the exact syntax of the DSN_string. It typically has these entries: 'DSN=datasource_name;UID=userid;PWD=password;' (case is important!). See Connect() for comments on clear_auto_commit.

    Note that this API is only available if the interface was compiled with the compile time switch HAVE_SQLDriverConnect defined. See the subpackages section and the subpackage's Setup file for details.

    As with Connect(), if you get an exception during connect telling you that the driver is not capable or does not support transactions, please try to connect with clear_auto_commit set to 0.

    When connecting to a database with transaction support, you should explicitly do a .rollback() or .commit() prior to closing the connection. mxODBC does an automatic rollback of the transaction when the connection is closed if the driver supports transactions.
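    The DSN_string passed to DriverConnect() is just a sequence of KEY=value; pairs. A small helper to assemble one is sketched below; the data source name and credentials are placeholders, not real settings:

```python
# Assemble a DriverConnect() DSN string from keyword/value pairs.
# 'MyDSN', 'scott' and 'tiger' are placeholder values.
def make_dsn_string(**kv):
    # ODBC keywords are conventionally written in upper case.
    return "".join("%s=%s;" % (k.upper(), v) for k, v in kv.items())

dsn = make_dsn_string(dsn="MyDSN", uid="scott", pwd="tiger")
print(dsn)  # DSN=MyDSN;UID=scott;PWD=tiger;
# Typical usage (requires an mxODBC subpackage compiled with
# HAVE_SQLDriverConnect):
#   db = DriverConnect(dsn, clear_auto_commit=0)
```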

    Helpers

    For some subpackages, mxODBC also defines a few helpers which you can use to query additional information from the ODBC driver or manager.

    DataSources()

    Returns a dictionary mapping data source names to descriptions.

    This function is only available for ODBC managers and some ODBC drivers which have internal ODBC manager support, e.g. IBM's DB2 ODBC driver.

    Connection Object Methods

    cursor([name])
    Constructs a new Cursor Object with the given name using the connection. If no name is given, the ODBC driver will determine a unique name on its own. You can query this name with cursor.getcursorname() (see the Cursors section).

    Note: This is an extension to the DB-API spec. The specification does not allow any arguments to the constructor.

    close()
    Close the connection now (rather than whenever __del__ is called). The connection will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the connection.

    commit()
    Commit any pending transaction to the database. Note that if the database supports an auto-commit feature, this must be initially off. An interface method may be provided to turn it back on.

    This method does nothing in case transactions are not supported on the connection or by the database.

    rollback()
    Note: This method is only available if the database subpackage was compiled with transaction support. For ODBC manager subpackages it may raise a NotSupportedError in case the connection does not support transactions.

    In case a database does provide transactions, this method causes the database to roll back to the start of any pending transaction. Closing a connection without committing the changes first will cause an implicit rollback to be performed.
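    The commit/rollback pattern is plain DB API 2.0 usage. The sketch below uses the standard library's sqlite3 module as a stand-in, since the same method calls apply to an mxODBC connection; the table and values are made up:

```python
import sqlite3

# DB API 2.0 commit/rollback pattern; sqlite3 stands in for an
# mxODBC connection object here -- the method calls are identical.
db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("CREATE TABLE t (id INTEGER)")
db.commit()

c.execute("INSERT INTO t VALUES (1)")
db.rollback()                       # undo the pending INSERT
c.execute("SELECT COUNT(*) FROM t")
after_rollback = c.fetchone()[0]
print(after_rollback)  # 0

c.execute("INSERT INTO t VALUES (2)")
db.commit()                         # make the INSERT permanent
c.execute("SELECT COUNT(*) FROM t")
after_commit = c.fetchone()[0]
print(after_commit)  # 1
db.close()
```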

    Additional Connection Methods only available in mxODBC

    The following methods are extensions to the DB-API specification and only available in mxODBC and its subpackages.

    setconnectoption(option, value)
    This method lets you set some ODBC integer options to new values, e.g. to set the transaction isolation level or to turn on auto-commit. option itself must be an integer. Suitable option values are available through the SQL singleton (see below), e.g. SQL.AUTOCOMMIT corresponds to the C-level SQL_AUTOCOMMIT option.

    The method is a direct interface to the ODBC SQLSetConnectOption() function. Please refer to the ODBC documentation for more information.

    Note that while the API function also supports setting character fields, the method currently does not know how to handle these.

    Note for ADABAS users: Adabas can emulate several different SQL dialects and provides a connection option to select one. These are the values you can use: 1 = ADABAS, 2 = DB2, 3 = ANSI, 4 = ORACLE, 5 = SAPR3. The option code is SQL.CONNECT_OPT_DRVR_START + 2 according to the Adabas documentation.

    getconnectoption(option)
    Same as above, except that the corresponding ODBC function now is SQLGetConnectOption(). option must be an integer. Suitable option values are available through the SQL singleton (see below).

    The method returns the data as a 32-bit integer. It is up to the caller to decode the integer using the SQL defines.

    getinfo(info_id)
    Interface to the corresponding ODBC function SQLGetInfo(). The info_id must be an integer. Suitable values are available through the SQL singleton (see below).

    The method returns a tuple (integer, string) giving an integer decoding (in native integer byte order) of the first bytes of the API's result as well as the raw buffer data as string. It is up to the caller to decode the data (e.g. using the struct module).

    This API gives you a very wide range of information about the underlying database and its capabilities. See the ODBC documentation for more information.
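    Decoding the raw buffer returned by .getinfo() is left to the caller, and the struct module is the natural tool. The sketch below simulates such a raw buffer rather than querying a live connection; the 16-bit layout and the value 3 are arbitrary examples:

```python
import struct

# .getinfo() returns (integer, raw_data); the integer decodes the
# first bytes in native byte order, raw_data holds the full buffer.
# We simulate the raw buffer for a 16-bit info value here (the
# value 3 is an arbitrary example, not a real SQLGetInfo() result).
raw_data = struct.pack("=H", 3) + b"\x00\x00"

# Decode the leading 16-bit unsigned integer from the buffer:
(value,) = struct.unpack_from("=H", raw_data)
print(value)  # 3
```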

    nativesql(command)
    This method returns the command as it would have been modified by the driver to pass to the database engine. It is a direct interface to the ODBC API SQLNativeSql().

    In many cases it simply returns the same string. Some drivers unescape ODBC escape sequences in the command string. Syntax checking is usually not applied by this method and errors are only raised in case of command string truncation.

    Note: Not all mxODBC subpackages support this API.

    Connection Object Attributes

    Note: All these attributes are extensions to the DB-API specification. Attributes are not defined for connection objects in the DB-API specification.

    bindmethod
    Attribute to query and set the input variable binding method used by the connection. This can either be BIND_USING_PYTHONTYPE or BIND_USING_SQLTYPE (see the Constants section).

    datetimeformat
    Use this instance variable to set the default output format for date/time/timestamp columns of all cursors created using this connection object. Possible values are:

    DATETIME_DATETIMEFORMAT [default]
    DateTime and DateTimeDelta instances

    TIMEVALUE_DATETIMEFORMAT
    Ticks (number of seconds since the epoch) and tocks (number of seconds since midnight)

    TUPLE_DATETIMEFORMAT
    Python tuples (see the above table for definitions)

    STRING_DATETIMEFORMAT
    Python strings. The format used depends on the settings in the database. See your database's manuals for the exact format and ways to change it.

    I strongly suggest always using the DateTime/DateTimeDelta instances. Note that changing the values of this attribute will not change the date/time format for existing cursors using this connection.
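    The "ticks" and "tocks" values produced by TIMEVALUE_DATETIMEFORMAT are plain numbers: seconds since the epoch and seconds since midnight, respectively. The tocks arithmetic can be sketched with the standard library (mxDateTime itself is not needed for the illustration):

```python
import datetime

# "Ticks" are seconds since the epoch, "tocks" seconds since
# midnight -- the values TIMEVALUE_DATETIMEFORMAT produces.
# Computed here with the stdlib for illustration only.
def tocks(t):
    """Seconds since midnight for a datetime.time value."""
    return t.hour * 3600 + t.minute * 60 + t.second

t = datetime.time(14, 30, 15)
print(tocks(t))  # 52215
```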

    stringformat
    Use this instance variable to set or query the default input and output handling for string columns of all cursors created using this connection object. Data conversion on input is dependent on the input binding type.

    Note: This variable only has an effect if mxODBC was compiled with Unicode support. If not, mxODBC will always work in EIGHTBIT_STRINGFORMAT mode.

    Possible values are:

    EIGHTBIT_STRINGFORMAT [default]
    String columns are converted to Python 8-bit strings using the connection's encoding setting (see the .encoding attribute of connection objects) and then passed to the ODBC driver.

    On output, all string columns are fetched as strings and passed back as Python 8-bit string objects.

    This setting emulates the behaviour of previous mxODBC versions and is the default.

    MIXED_STRINGFORMAT
    Input and output conversion is dependent on the data format the ODBC driver expects or returns for a given column. If the driver returns a string, a Python string is created; if it returns Unicode data, a Python Unicode object is used.

    UNICODE_STRINGFORMAT
    On input, Unicode objects are converted to 8-bit strings using the connection's encoding setting (see the .encoding attribute of connection objects) and then passed to the ODBC driver.

    On output, string data is converted to Python Unicode objects in the same way.

    Use this setting if you plan to use Unicode objects with non-Unicode aware databases (e.g. by setting the encoding to UTF-8 -- be careful though: multibyte character encodings usually take up more space and are not necessarily compatible with the database's string functions).

    NATIVE_UNICODE_STRINGFORMAT
    String columns are converted to Python Unicode objects assuming the connection's encoding setting (see the .encoding attribute of connection objects) and then passed as Unicode to the ODBC driver.

    On output, string data is always fetched as Unicode data from the ODBC driver and returned using Python Unicode objects.

    Note that even though mxODBC may report that Unicode support is enabled (the global HAVE_UNICODE_SUPPORT is 1), the ODBC driver may still reject Unicode data. In this case an InternalError of type S1003 is raised whenever trying to read data from the database in this stringformat mode.

    You can use the included test.py script to find out whether the database backend supports Unicode or not.

    Note that binary and other plain data columns will still use 8-bit strings for interfacing, since storing this data in Unicode objects would cause trouble. mxODBC will eventually use buffer objects to store binary data in some future version.

    encoding
    Read/write attribute which defines the encoding to use for converting Unicode to 8-bit strings and vice-versa. If set to None (default), Python's default encoding will be used, otherwise it has to be a string providing a valid encoding name, e.g. 'latin-1' or 'utf-8'.

    closed
    Instance variable that is true in case the connection is closed. Any action on a closed connection will cause a ProgrammingError to be raised. This variable can be used to conveniently test for this state.

    converter
    Read/write attribute that sets the converter callback default for all newly created cursors using the connection. It is None per default (meaning to use the standard conversion mechanism). See the data type section for details.

    Cursor Objects

    These objects represent a database cursor, which is used to manage the context of a fetch operation. They are created via the connection method cursor(). Cursors are only active as long as the connection is. Using cursors on a closed connection will cause a ProgrammingError to be raised.

    mxODBC uses a dedicated object type for cursors. Each subpackage defines its own object type, but all share the same name: CursorType.

    Cursor Object Methods

    The following cursor methods are defined in the DB API:

    callproc(procname[, parameters])
    This method is not yet implemented in mxODBC.

    Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. The result of the call is returned as modified copy of the input sequence. Input parameters are left untouched, output and input/output parameters replaced with possibly new values.

    The procedure may also provide a result set as output. This must then be made available through the standard fetchXXX() methods.

    close()
    Close the cursor now (rather than whenever __del__ is called). The cursor will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the cursor.

    execute(operation[,parameters])
    Prepare and execute a database operation (query or command).

    Parameters must be provided as sequence and will be bound to variables in the operation. Variables are specified using the ODBC variable placeholder '?', e.g. 'SELECT name,id FROM table WHERE amount > ? AND amount < ?' (also see the module attribute paramstyle) and get bound in the order they appear in the SQL statement from left to right.

    A reference to the operation will be retained by the cursor. If the same operation object is passed in again, then the cursor will optimize its behavior by reusing the previously prepared statement. This is most effective for algorithms where the same operation is used, but different parameters are bound to it (many times).

    The parameters may also be specified as list of tuples to e.g. insert multiple rows in a single operation, but this kind of usage is deprecated: executemany() should be used instead.

    Return values are not defined.
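    The qmark placeholder binding works as in any DB API 2.0 module. The sketch below uses the standard library's sqlite3 module as a stand-in, since it shares the '?' paramstyle with mxODBC; the table and data are made up:

```python
import sqlite3

# The '?' qmark paramstyle demonstrated with sqlite3, which uses
# the same placeholder convention as mxODBC.
db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("CREATE TABLE stock (name TEXT, amount INTEGER)")
c.execute("INSERT INTO stock VALUES (?, ?)", ("bolts", 150))
c.execute("INSERT INTO stock VALUES (?, ?)", ("nuts", 25))

# Parameters bind to the '?' markers from left to right:
c.execute("SELECT name FROM stock WHERE amount > ? AND amount < ?",
          (100, 200))
result = c.fetchall()
print(result)  # [('bolts',)]
db.close()
```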

    executedirect(operation[,parameters])
    Just like .execute(), except that no prepare step is issued and the operation is not cached. This can result in better performance with some ODBC driver setups.

    Return values are not defined.

    executemany(operation,seq_of_parameters[,direct])
    Prepare a database operation (query or command) and then execute it against all parameter sequences found in the sequence seq_of_parameters.

    mxODBC currently only supports lists of tuples in seq_of_parameters. This will be changed in future versions.

    The same comments as for execute() also apply accordingly to this method.

    If the optional integer direct is given and true, mxODBC will not cache the operation, but submit it for one-time execution to the database. This can result in better performance with some ODBC driver setups.

    Return values are not defined.
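    A list of tuples is exactly what executemany() expects. The sketch below again uses sqlite3 as a DB API 2.0 stand-in; table and rows are examples:

```python
import sqlite3

# executemany() with a list of tuples -- the only parameter format
# mxODBC currently accepts; sqlite3 stands in for demonstration.
db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("CREATE TABLE t (id INTEGER, name TEXT)")
rows = [(1, "one"), (2, "two"), (3, "three")]
c.executemany("INSERT INTO t VALUES (?, ?)", rows)
db.commit()
c.execute("SELECT COUNT(*) FROM t")
count = c.fetchone()[0]
print(count)  # 3
db.close()
```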

    fetchone()
    Fetch the next row of a query result set, returning a single sequence, or None when no more data is available.

    An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.

    mxODBC will move the associated database cursor by one row only. This behaviour is not guaranteed by the DB API, though.

    fetchmany([size=cursor.arraysize])
    Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available.

    The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned.

    An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.

    Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next.
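    A typical chunked-fetch loop with fetchmany() and arraysize can be sketched as follows, again using sqlite3 as a DB API 2.0 stand-in with made-up data:

```python
import sqlite3

# Chunked fetching with fetchmany() and cursor.arraysize; sqlite3
# stands in for an mxODBC cursor, the calls are plain DB API 2.0.
db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("CREATE TABLE t (id INTEGER)")
c.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

c.arraysize = 4                 # default chunk size for fetchmany()
c.execute("SELECT id FROM t ORDER BY id")
chunks = []
while True:
    rows = c.fetchmany()        # fetches up to c.arraysize rows
    if not rows:
        break
    chunks.append(len(rows))
print(chunks)  # [4, 4, 2]
db.close()
```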

    fetchall()
    Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor's arraysize attribute can affect the performance of this operation.

    An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.

    nextset()
    This method is not yet implemented in mxODBC.

    This method will make the cursor skip to the next available set, discarding any remaining rows from the current set.

    If there are no more sets, the method returns None. Otherwise, it returns a true value and subsequent calls to the fetch methods will return rows from the next result set.

    An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet.

    setinputsizes(sizes)
    This method does nothing in mxODBC; it is only provided for DB API compliance.

    setoutputsize(size[, column])
    This method does nothing in mxODBC; it is only provided for DB API compliance.

    Additional Cursor Methods only available in mxODBC

    The following methods are extensions to the DB-API specification and only available in mxODBC and its subpackages.

    Since not all databases support all of these ODBC APIs, they can be selectively switched off at compile time to adapt the interface to the underlying ODBC driver/manager. Please take note of this when using them.

    All of the following catalog methods use the same interface: they do an implicit call to cursor.execute() and return their output in the form of a result set that can be fetched with the fetchXXX() methods in the usual way. The methods always return the number of rows in the result set.

    Please refer to the ODBC documentation for more detailed information about parameters and the layout of the result sets. Note: If you pass None as a value where a string would be expected, that entry is converted to NULL before passing it to the underlying API.

    All catalog methods support keywords and use the indicated default values for parameters which are omitted in the call.

    prepare(operation)
    Prepare a database operation (query or command) statement for later execution and set cursor.command. To execute a prepared statement, pass cursor.command to one of the .executeXXX() methods.

    Return values are not defined.

    Note: this method is unavailable if mxODBC was compiled with compile time switch DISABLE_EXECUTE_CACHE.

    tables(qualifier=None, owner=None, table=None, type=None)
    Same as the corresponding ODBC API function. Catalog method.

    tableprivileges(qualifier=None, owner=None, table=None)
    Same as the corresponding ODBC API function. Catalog method.

    columns(qualifier=None, owner=None, table=None, column=None)
    Same as the corresponding ODBC API function. Catalog method.

    columnprivileges(qualifier=None, owner=None, table=None, column=None)
    Same as the corresponding ODBC API function. Catalog method.

    foreignkeys(primary_qualifier=None, primary_owner=None, primary_table=None, foreign_qualifier=None, foreign_owner=None, foreign_table=None)
    Same as the corresponding ODBC API function. Catalog method.

    primarykeys(qualifier=None, owner=None, table=None)
    Same as the corresponding ODBC API function. Catalog method.

    procedures(qualifier=None, owner=None, procedure=None)
    Same as the corresponding ODBC API function. Catalog method.

    procedurecolumns(qualifier=None, owner=None, procedure=None, column=None)
    Same as the corresponding ODBC API function. Catalog method.

    specialcolumns(qualifier=None,owner=None,table=None, coltype=SQL.BEST_ROWID,scope=SQL.SCOPE_SESSION, nullable=SQL.NO_NULLS)
    Same as the corresponding ODBC API function. Note: coltype is in a different position than it is in the corresponding C call. Catalog method.

    statistics(qualifier=None, owner=None, table=None, unique=SQL.INDEX_ALL, accuracy=SQL.ENSURE)
    Same as the corresponding ODBC API function. Catalog method.

    gettypeinfo(sqltypecode)
    Same as the corresponding ODBC API function. Catalog method.

    setcursorname(name)
    Sets the name to be associated with the cursor object. SQL imposes a length limit of 18 characters on cursor names. An InternalError will be raised if the name is too long or otherwise not usable.

    getcursorname()
    Returns the current cursor name associated with the cursor object. This may either be the name given to the cursor at creation time or a name generated by the ODBC driver for it to use.

    setconverter(converter)
    This method sets the converter function to use for subsequent fetches. Passing None as converter will reset the converter mechanism to its default setting. See the data type section for details on how user-defined converters work.

    freeset()
    Frees any pending result set used by the cursor. If you only fetch some of the rows of large result sets you can optimize memory usage by calling this method.

    Note that .executeXXX() and all the catalog methods do an implicit .freeset() prior to executing a new query.

    If you plan to write cross database applications, use these methods with care since at least some of the databases I know don't support certain APIs or return misleading results.

    Also, be sure to check the correct performance of the methods and executes. I don't want to see you losing your data due to some error I made, or the fact that the ODBC driver of your DB is buggy.

    Cursor Object Attributes

    description
    This read-only attribute is a sequence of 7-item sequences for operations that produce a result set (which may be empty). Each of these sequences contains information describing one result column: (name, type_code, display_size, internal_size, precision, scale, null_ok).

    This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the executeXXX() method yet.

    mxODBC always returns None for display_size and internal_size. This information can be obtained via connection.gettypeinfo().

    The type_code can be interpreted by comparing it to the Type Objects specified in the section below. mxODBC returns the SQL type integers in this field. These are described in the section on Datatypes and are available through the SQL singleton defined at module level.
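    Unpacking the 7-item description tuples looks like this; sqlite3 stands in for an mxODBC cursor here, with the caveat noted in the comments:

```python
import sqlite3

# Inspecting cursor.description: one 7-tuple per result column.
# sqlite3 stands in for an mxODBC cursor; note that sqlite3,
# unlike mxODBC, leaves every field except the name set to None.
db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("CREATE TABLE t (id INTEGER, name TEXT)")
c.execute("SELECT id, name FROM t")
cols = [entry[0] for entry in c.description]      # column names
widths = {len(entry) for entry in c.description}  # tuple lengths
print(cols)    # ['id', 'name']
print(widths)  # {7}
db.close()
```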

    rowcount
    This read-only attribute specifies the number of rows that the last executeXXX() produced (for DQL statements like select) or affected (for DML statements like update or insert).

    The attribute is -1 in case no executeXXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface.

    arraysize
    This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1 meaning to fetch a single row at a time.

    mxODBC observes this value with respect to the fetchmany() method, but currently interacts with the database a single row at a time.

    Additional Cursor Attributes only available in mxODBC

    The following attributes are extensions to the DB-API specification and only available in mxODBC and its subpackages.

    datetimeformat
    Attribute to set the output format for date/time/timestamp columns on a per cursor basis. It takes the same values as the connection.datetimeformat instance variable and defaults to the creating connection object's settings for datetimeformat.

    stringformat
    Attribute to set the conversion format for string columns on a per cursor basis. It takes the same values as the connection.stringformat instance variable and defaults to the creating connection object's settings for stringformat.

    command
    Provides access to the last prepared or executed SQL command available through the cursor. If no such command is available, the attribute is None.

    SQL commands are set by .prepare() and .executeXXX(). They are reset by the catalog methods.

    colcount
    This read-only attribute specifies the number of columns in the current result set.

    The attribute is -1 in case no executeXXX() has been performed on the cursor.

    paramcount
    This read-only attribute specifies the number of parameters in the current prepared command.

    The attribute is -1 in case this information is not available.

    closed
    This read-only attribute is true if the cursor or the underlying connection was closed by calling the .close() method.

    Any action on a closed connection or cursor will cause a ProgrammingError to be raised. This variable can be used to conveniently test for this state.

    Type Objects and Constructors

      Since many databases have problems recognizing a column's or parameter's type beforehand (e.g. for LONGs and date/time values), the DB API provides a set of standard constructors to create objects that can hold special values. When passed to the cursor methods, the module can then detect the proper type of the input parameter and bind it accordingly.

      In mxODBC these constructors are not really needed: it uses the objects defined in mxDateTime for date/time values and is able to pass strings and buffer objects to LONG and normal CHAR columns without problems. You only need them to write code that is portable across database interfaces.

      A Cursor Object's description attribute returns information about each of the result columns of a query. The type_code compares equal to one of the Type Objects defined below. Type Objects may be equal to more than one type code (e.g. DATETIME could be equal to the type codes for date, time and timestamp columns).

      mxODBC returns more detailed type code information in the description attribute. See the datatypes section for details. The type objects are only defined for compatibility with the DB API standard and other database interfaces.

      The module exports the following constructors and singletons:

      Date(year,month,day)
      This function constructs a DateTime object holding the given date value. The time is set to 0:00:00.

      Time(hour,minute,second)
      This function constructs a DateTimeDelta object holding the given time value.

      Timestamp(year,month,day,hour,minute,second)
      This function constructs an object holding a time stamp value.

      DateFromTicks(ticks)
      This function constructs a DateTime object holding the date value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).

      TimeFromTicks(ticks)
      This function constructs a DateTimeDelta object holding a time value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).

      TimestampFromTicks(ticks)
      This function constructs a DateTime object holding a time stamp value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details).

      Binary(string)
      This function constructs a buffer object pointing to the (long) string value. On Python versions without buffer objects (prior to 1.5.2), the string is taken as is.

      STRING
      This type object is used to describe columns in a database that are string-based: SQL.CHAR, SQL.BINARY.

      BINARY
      This type object is used to describe (long) binary columns in a database: SQL.LONGVARCHAR, SQL.LONGVARBINARY (e.g. LONG, RAW, BLOB, TEXT).

      NUMBER
      This type object is used to describe numeric columns in a database: SQL.DECIMAL, SQL.NUMERIC, SQL.DOUBLE, SQL.FLOAT, SQL.REAL, SQL.INTEGER, SQL.TINYINT, SQL.SMALLINT, SQL.BIT, SQL.BIGINT.

      DATETIME
      This type object is used to describe date/time columns in a database: SQL.DATE, SQL.TIME, SQL.TIMESTAMP.

      ROWID
      This type object is used to describe the "Row ID" column in a database. mxODBC does not support this special column type and thus no type code is equal to this type object.

      SQL NULL values are represented by the Python None singleton on input and output.

      Note: Usage of Unix ticks for database interfacing can cause trouble because of the limited date range they cover.
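      The ticks-based constructors are defined in terms of the standard time module. The following pure-Python sketch (mxDateTime itself is not required for this illustration) shows which broken-down local time values DateFromTicks() and TimestampFromTicks() encode; the helper names are made up for the example:

```python
import time

def date_parts_from_ticks(ticks):
    # DateFromTicks(ticks) builds its DateTime from this (year, month, day)
    return time.localtime(ticks)[:3]

def timestamp_parts_from_ticks(ticks):
    # TimestampFromTicks(ticks) uses the full broken-down local time
    return time.localtime(ticks)[:6]
```

      Since the conversion goes through local time, the resulting date depends on the timezone in effect, which is one more reason to prefer passing DateTime instances directly.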

    Exceptions

    The module uses the DB API 2.0 exceptions layout.

    Error
    Baseclass for all other exceptions related to database or interface errors.

    You can use this class to catch all errors related to database or interface failures. error is just an alias to Error needed for DB-API 1.0 compatibility.

    Error is a subclass of exceptions.StandardError.

    Warning
    Exception raised for important warnings like data truncations while inserting, etc.

    Warning is a subclass of exceptions.StandardError. This may change in a future release to some other baseclass indicating warnings.

    InterfaceError
    Exception raised for errors that are related to the interface rather than the database itself.

    DatabaseError
    Exception raised for errors that are related to the database.

    DataError
    Exception raised for errors that are due to problems with the processed data like division by zero, numeric out of range, etc.

    OperationalError
    Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc.

    IntegrityError
    Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails.

    InternalError
    Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc.

    ProgrammingError
    Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, performing operations on closed connections etc.

    NotSupportedError
    Exception raised in case a method or database API was used which is not supported by the database, e.g. requesting a .rollback() on a connection that does not support transactions or has transactions turned off.

    This is the exception inheritance layout:

    StandardError
    |__Warning
    |__Error
       |__InterfaceError
       |__DatabaseError
          |__DataError
          |__OperationalError
          |__IntegrityError
          |__InternalError
          |__ProgrammingError
          |__NotSupportedError
        

    A hint in case of trouble with Warning exceptions:
    If some function does not work because it always raises a Warning exception, you could either turn off warning generation completely by recompiling the module using the DONT_REPORT_WARNINGS flag (see Setup[.in] for explanations), or step through the source to find the exact location where the exception occurs and replace the Py_SQLCheck() with Py_SQLErrorCheck(). If you do the latter, please inform me of the change so that I can include it in future versions.

    If you are interested in the exact mapping of SQL error codes to exception classes, have a look at the mxODBC_ErrorCodeTranslations array defined in mxODBC.c.
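    The same exception layout is shared by other DB API 2.0 modules, so error handling code looks alike across interfaces. As a runnable illustration, the standard library's sqlite3 module (which follows the identical hierarchy) is used here as a stand-in for mxODBC:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

try:
    # Querying a non-existing table is an operational error
    cur.execute("SELECT * FROM no_such_table")
except sqlite3.OperationalError:
    caught = "OperationalError"
except sqlite3.Error:
    # Error is the catch-all base class for database failures
    caught = "Error"
```

    With mxODBC the pattern is the same; catching the module's Error class covers all database and interface failures at once.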

    Constants

    SQL
    Singleton object which defines nearly all values available in the ODBC 3.5 header files. The "SQL_" part of the ODBC symbols is omitted, e.g. SQL_AUTOCOMMIT is available as SQL.AUTOCOMMIT.

    errorclass
    Dictionary mapping SQL error code strings to exception objects used by the module.

    sqltype
    Dictionary mapping SQL type codes (these are returned in the type field of cursor.description) to type strings. All natively supported SQL type codes are included in this dictionary. The contents may vary depending on whether the ODBC driver/manager defines these types or not.

    CHAR, VARCHAR, LONGVARCHAR, BINARY, VARBINARY, LONGVARBINARY, TINYINT, SMALLINT, INTEGER, BIGINT, DECIMAL, NUMERIC, BIT, REAL, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP [, CLOB, BLOB, TYPE_DATE, TYPE_TIME, TYPE_TIMESTAMP, UNICODE, UNICODE_LONGVARCHAR, UNICODE_VARCHAR, WCHAR, WVARCHAR, WLONGVARCHAR]
    ODBC 2.0 type code integers for the various natively supported SQL types. These map to integers as returned in the type field of cursor.description.

    They are also available through the SQL singleton, e.g. SQL.CHAR. The dictionary sqltype provides the inverse mapping.

    The codes mentioned in square brackets are optional and only available if the ODBC driver/manager supports a later ODBC version than 2.5.

    Note that mxODBC has support for unknown SQL types: it returns these types converted to strings. The conversion is done by the ODBC driver and may be driver dependent.
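    To illustrate how the type code constants and the sqltype dictionary relate: the integer values below are the standard ODBC type codes from the sql.h headers, hard-coded here only for the sketch (the real module exposes them as attributes of the SQL singleton and builds sqltype for you):

```python
# Standard ODBC type codes (from the ODBC sql.h headers)
CHAR, NUMERIC, DECIMAL, INTEGER, SMALLINT = 1, 2, 3, 4, 5
FLOAT, REAL, DOUBLE = 6, 7, 8
VARCHAR = 12

# sqltype provides the inverse mapping: type code integer -> type name
sqltype = {
    CHAR: 'CHAR', NUMERIC: 'NUMERIC', DECIMAL: 'DECIMAL',
    INTEGER: 'INTEGER', SMALLINT: 'SMALLINT', FLOAT: 'FLOAT',
    REAL: 'REAL', DOUBLE: 'DOUBLE', VARCHAR: 'VARCHAR',
}

def type_name(type_code):
    # Decode the type field of a cursor.description entry
    return sqltype.get(type_code, 'UNKNOWN')
```

    In real code you would look up cursor.description[i][1] in the module's sqltype dictionary instead of a hand-built one.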

    threadsafety
    Integer constant stating the level of thread safety the interface supports. It is usually set to 1, meaning that each thread must use its own connection.

    apilevel
    String constant stating the supported DB API level. This is currently set to '1.0', but will be changed to '2.0' as soon as full DB API 2.0 is reached.

    paramstyle
    String constant stating the type of parameter marker formatting expected by the interface. This is set to 'qmark', since ODBC interfaces always expect '?' to be used as positional placeholder for variables in an SQL statement. Parameters are bound to these placeholders in the order they appear in the SQL statement, e.g. the first parameter is bound to the first question mark, the second to the second and so on.
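    The 'qmark' style is shared by other DB API modules. For a runnable illustration, the standard library's sqlite3 module (also paramstyle 'qmark') binds parameters to the question marks in exactly the same positional order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT)")

# The first parameter binds to the first '?', the second to the second
cur.execute("INSERT INTO t (id, name) VALUES (?, ?)", (1, "first"))
cur.execute("SELECT name FROM t WHERE id = ?", (1,))
row = cur.fetchone()
```

    With mxODBC, cursor.execute() takes the same (statement, parameter-tuple) form.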

    BIND_USING_SQLTYPE, BIND_USING_PYTHONTYPE
    Integer values returned by connection.bindmethod.

    SQL type binding means that the interface queries the database to find out which conversion to apply and which input type to expect, while Python type binding looks at the parameters you pass to the methods to find out the type information and then lets the database apply any conversions.

    The bind method is usually set at compilation time, but can also differ from database to database when accessing them via an ODBC manager.

    DATETIME_DATETIMEFORMAT, TIMEVALUE_DATETIMEFORMAT, TUPLE_DATETIMEFORMAT, STRING_DATETIMEFORMAT
    Integer values which are used by connection.datetimeformat and cursor.datetimeformat.

    mxODBC can handle different output formats for date/time values on a per connection and per cursor basis. See the documentation of the two attributes for more information.

    EIGHTBIT_STRINGFORMAT, MIXED_STRINGFORMAT, UNICODE_STRINGFORMAT, NATIVE_UNICODE_STRINGFORMAT
    Integer values which are used by connection.stringformat and cursor.stringformat.

    mxODBC can handle different string conversion methods on a per connection and per cursor basis. See the documentation of the two attributes for more information.

    HAVE_UNICODE_SUPPORT
    Integer flag which is either 0 or 1 depending on whether mxODBC was compiled with Unicode support or not. Unicode support is only available in Python 1.6 or above.

    Debugging

    To simplify debugging the module, I added debugging output in several important places. The feature is only enabled if the module is compiled with -DMAL_DEBUG, and output is only generated if Python is run in debugging mode (use the Python interpreter flag '-d'). The debugging log file is named mxODBC.log. It will be created in the current working directory; messages are always appended to the file, so no trace is lost until you explicitly erase the log file. If the log file cannot be opened, the module will use stderr to do the reporting.

    To have the package compiled using the MAL_DEBUG define, prepend the distutils command mx_autoconf --enable-debugging to the build or install command. This will then enable the define and compile a debugging version of the code.

    Note that the debug version of the module is almost as fast as the regular build, so you might as well leave it enabled.

    Thread Safety & Thread Friendly

    mxODBC itself is written in a thread safe way. There are no module globals that are written to and thus no locking is necessary. Many of the underlying ODBC SQL function calls are wrapped by macros unlocking the global Python interpreter lock before doing the call and regaining that lock directly afterwards. The most prominent of those are the connection APIs and the execute and fetch APIs.

    So in general when using a separate database connection for each thread, you shouldn't run into threading problems. If you do, it is more likely that the ODBC driver is not 100% thread safe and thus not 100% ODBC compatible. Note that having threads share cursors is not a good idea: there are many very strange transaction related problems you can then run into.

    Unlocking the interpreter lock during long SQL function calls gives your application more responsiveness. This is especially important for GUI based applications, since no other Python thread can run when the global lock is acquired by one thread.

    Note: mxODBC will only support threading if you have built Python itself with thread support enabled. Python for Windows has this enabled per default. Try: python -c "import thread" to find out.

    Thanks to Andy Dustman for bringing this to my attention. If you do run into mxODBC related threading problems, feel free to contact me.
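    A minimal sketch of the one-connection-per-thread rule (threadsafety level 1), using the standard library's sqlite3 module as a stand-in DB API interface since a live ODBC data source cannot be assumed here:

```python
import sqlite3
import threading

results = {}

def worker(i):
    # threadsafety level 1: every thread opens and uses its own connection
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("SELECT ? + 1", (i,))
    results[i] = cur.fetchone()[0]
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    The same structure applies to mxODBC: create the connection inside the thread that will use it, and never share cursors between threads.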

    Transaction Support

    ODBC uses auto-commit on new connections per default. This means that every executed SQL statement directly takes effect on the underlying database, even in cases where you would rather back out of a modification, e.g. due to an unexpected error in your program.

    Important: mxODBC switches new connections from auto-commit to manual commit mode to reenable manual commit per default -- unless the constructor flag clear_auto_commit is set to 0 or the database does not provide transactions.

    Using a connection in manual commit mode means that all your commands are grouped in transactions: only the connection will see the changes it has made to the data in the database until an explicit connection.commit() is issued. The commit informs the database to write all changes done during the last transaction into the global data storage making it visible to all other users. A rollback on the other hand, tells the database to discard all modifications processed in the last transaction.

    New transactions are started in the following cases: creation of a new connection, on return from a .commit() and from .rollback().

    Note: Unless you perform an explicit connection.commit() prior to deleting or closing the connection, mxODBC will try to issue an implicit rollback on that connection before actually closing it. Errors are only reported in case you use the connection.close() method. Implicit closing of the connection through Python's garbage collection will ignore any errors occurring during rollback.

    Data sources that do not support transactions, such as Excel files, cause calls to .rollback() to fail with a NotSupportedError. mxODBC will not turn off auto-commit behaviour for these sources. The setting of the connection constructor flag clear_auto_commit has no effect in this case.

    Some databases for which mxODBC provides special subpackages such as MySQL don't have transaction support. For these, the .rollback() connection method is not available at all (i.e. calling it produces an AttributeError) and the clear_auto_commit flag on connection constructors defaults to 0.
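    The manual-commit semantics described above can be sketched with any DB API module; the standard library's sqlite3 module is used here as a stand-in, since a configured ODBC data source cannot be assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
cur.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()      # write the changes to the global data storage

cur.execute("UPDATE accounts SET balance = 0 WHERE id = 1")
conn.rollback()    # discard all modifications since the last commit

cur.execute("SELECT balance FROM accounts WHERE id = 1")
balance = cur.fetchone()[0]
```

    An mxODBC connection created with the default clear_auto_commit setting behaves the same way: the UPDATE is invisible to other users until committed, and the rollback restores the committed state.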

    Stored Procedures

    Even though mxODBC does not yet support the .callproc() API, it does allow calling stored procedures which return data using result sets.

    The ODBC syntax for calling a stored procedure is as follows:

            {call procedure-name [([parameter][,[parameter]]...)]}
          
    Using the above syntax, you can call stored procedures through one of the .executeXXX() calls, e.g. cursor.execute("{call myprocedure(?,?)}", (1,2)) will call the stored procedure myprocedure with the parameters 1, 2.

    Note: You should not use any output statements such as "PRINT" in the stored procedures, since this will cause at least some ODBC drivers (notably the MS SQL Server one) to turn the output into an SQL error which causes the execution to fail. On the other hand, these error messages can be useful to pass along error conditions to the Python program, since the error message string will be the output of the "PRINT" statement.

Supported Data Types

    mxODBC supports two different input variable binding modes (also see the Constants section):

    Binding Mode        | Value of connection.bindmethod | Comments
    SQL type binding    | BIND_USING_SQLTYPE             | The database is asked for the appropriate data type and mxODBC tries to convert the input variable into that type.
    Python type binding | BIND_USING_PYTHONTYPE          | mxODBC looks at the type of the input variable and passes its value to the database directly; conversion is done by the ODBC driver/manager as necessary.

    The default depends on the settings with which the ODBC subpackage was compiled. If not indicated in the subpackage section, it is set to SQL type binding, since this offers more flexibility.

    Note that for SQL type binding to be possible, mxODBC needs a working ODBC SQLDescribeParam() API implementation. This is checked at connect time and the binding style adjusted to Python type binding, if mxODBC cannot rely on SQLDescribeParam().

    Output Conversions and SQL Type Input Binding

    The following data types are used per default for output variable mapping and for BIND_USING_SQLTYPE input variable binding.

    SQL Type | Python Type | Comments
    CHAR, VARCHAR, LONGVARCHAR (TEXT, BLOB or LONG in SQL) | String | The conversion truncates the string at the SQL field length. The handling of special characters depends on the codepage the database uses.

    Some database drivers/managers can't handle binary data in these column types, so you should check the database's capabilities with the included test script first before using them.

    WCHAR, WVARCHAR, WLONGVARCHAR (TEXT, BLOB or LONG in SQL) | String or Unicode | Whether a Python string or Unicode object is returned depends on the setting of the .stringformat attribute of the cursor fetching the data. Unicode is only available in case mxODBC was compiled with Unicode support.

    Note that currently none of the tested ODBC drivers natively supports Unicode. The only way to store Unicode data in a non-Unicode aware database is by encoding it using e.g. UTF-8.

    The conversion truncates the string at the SQL field length.

    BINARY, VARBINARY, LONGVARBINARY (BLOB or LONG BYTE in SQL) | String | Truncation at the SQL field length. These can contain embedded 0-bytes and other special characters.

    Handling of these column types is database dependent. Please refer to the database's documentation for details.

    Many databases store the passed-in data as-is and thus make these column types usable as a storage facility for arbitrary binary data.

    TINYINT, SMALLINT, INTEGER, BIT | Integer | Conversion from the Python integer (a C long) to the SQL type is left to the ODBC driver/manager, so expect the usual truncations.
    BIGINT | Long Integer | Conversion to and from the Python long integer is done via the string representation since there is no C type with enough precision to hold the value. Because of this, you might receive errors indicating truncation or errors because the database sent string data that cannot be converted to a Python long integer.

    Not all SQL databases implement this type; MySQL is one that does.

    DECIMAL, NUMERIC, REAL, FLOAT, DOUBLE | Float | Conversion from the Python float (a C double) to the SQL type is left to the ODBC driver/manager, so expect the usual truncations.
    DATE | DateTime instance, ticks, (year,month,day) tuple or String | While you should use DateTime instances, the module also accepts ticks (Python numbers indicating the number of seconds since the Unix epoch; these are converted to local time and then stored in the database) and tuples (year,month,day) on input.

    The type of the return values depends on the setting of cursor.datetimeformat and whether the ODBC driver/manager returns the value with proper type information. The default is to return DateTime instances in case the data comes back with the SQL DATE type code.

    TIME | DateTimeDelta instance, tocks, (hour,minute,second) tuple or String | While you should use DateTimeDelta instances, the module also accepts tocks (Python numbers indicating the number of seconds since 0:00:00.00) and tuples (hour,minute,second) on input.

    The type of the return values depends on the setting of cursor.datetimeformat and whether the ODBC driver/manager returns the value with proper type information. The default is to return DateTimeDelta instances in case the data comes back with the SQL TIME type code.

    TIMESTAMP | DateTime instance, ticks, (year,month,day,hour,minute,second) tuple or String | While you should use DateTime instances, the module also accepts ticks (Python numbers indicating the number of seconds since the epoch; these are converted to local time and then stored in the database) and tuples (year,month,day,hour,minute,second) on input.

    The type of the return values depends on the setting of cursor.datetimeformat and whether the ODBC driver/manager returns the value with proper type information. The default is to return DateTime instances in case the data comes back with the SQL TIMESTAMP type code.

    Unsupported Type | String | mxODBC will try to fetch data from columns using unsupported data types as strings. This is likely to always work but may cause unwanted conversions and/or truncations or loss of precision.

    Input binding to these columns is also done via strings (or str()ified versions of the input data).

    If you find this annoying, please contact the author to find out if there are any plans to add the missing type.

    Note that output bindings can only be done using the above mapping by mxODBC if the database correctly identifies the type of the output variables.

    Some ODBC drivers return data using different type codes than the ones accepted for input, e.g. a database might accept a time value, convert it internally to a timestamp and then return it in a subsequent SELECT as a timestamp value. mxODBC cannot know that the value only contains valid time information and no date information, and thus converts the output data into a DateTime instance instead of a DateTimeDelta instance (which would normally be returned for time values).

    Use the included test script to check for this behaviour. It tests many common column types and reports the outcome.

    Output Type Converter Functions

    The above table defines the standard mapping mxODBC does when fetching output data from the database.

    You can modify this mapping on-the-fly by defining a cursor converter function which takes three arguments and has to return a 2-tuple:

    def converter(position,sqltype,sqllen):
        # modify sqltype and sqllen as appropriate
        return sqltype,sqllen
    
    # Now tell the cursor to use this converter:
    cursor.setconverter(converter)
    
    or 3-tuple:
    def converter(position,sqltype,sqllen):
        # modify sqltype and sqllen as appropriate, provide binddata as
        # input (e.g. for file names which should be used for file binding)
        return sqltype,sqllen,binddata
    
    # Now tell the cursor to use this converter:
    cursor.setconverter(converter)
    

    The converter function is called for each output column prior to the first fetch executed on the cursor. The returned values are then interpreted as defined in the above table.

    sqltype is usually one of the SQL data type constants, e.g. SQL.CHAR for string data, but could also have database specific values. mxODBC only understands the ones defined in the above table, so this gives you a chance to map user defined types to ones that Python can process.

    sqllen is only used for string data and defines the maximum length of strings that can be read in that column (mxODBC allocates a memory buffer of this size for the data transfer).

    Returning 0 as sqllen will result in mxODBC dynamically growing the data transfer buffer when fetching the column data. This is sometimes handy in case you want to fetch data that can vary in size.

    binddata is optional and only needed for some special sqltypes. It will be used in future versions to e.g. allow binding output columns to files which some ODBC drivers support (the column data is transferred directly to a file instead of copied into memory).

    Cursors will use the connection's .converter attribute as default converter. It defaults to None, meaning that no converter function is in effect. None can also be used to disable the converter function on a cursor:

    # Don't use a converter function on the cursor
    cursor.setconverter(None)
    

    You can switch converter functions even in between fetches. mxODBC will then reallocate and rebind the column buffers for you.
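    As a concrete example, here is a hedged sketch of a converter that requests dynamically grown transfer buffers for all CHAR columns. The value 1 is the standard ODBC type code for CHAR, hard-coded only for the sketch (the real module exposes it as SQL.CHAR), and the function name is made up:

```python
# Standard ODBC type code for CHAR columns, hard-coded for the sketch;
# the real module exposes it as SQL.CHAR.
SQL_CHAR = 1

def dynamic_string_converter(position, sqltype, sqllen):
    # sqllen 0 tells mxODBC to grow the data transfer buffer as needed
    # (see above) -- handy for string columns of widely varying size.
    if sqltype == SQL_CHAR:
        return sqltype, 0
    return sqltype, sqllen

# Attach to a real cursor before fetching:
# cursor.setconverter(dynamic_string_converter)
```

    All other columns are passed through unchanged, so the standard mapping from the table above still applies to them.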

    Python Type Input Binding

    These mappings are used for input variables in BIND_USING_PYTHONTYPE mode (see the ODBC documentation for more information on how the C datatypes are mapped to SQL column types). Output variables are treated by applying the conversions defined in the previous section.

    Python Type | SQL C Data Type | Comments
    String | CHAR (char *) | The conversion truncates the string at the SQL field length. The string may also contain binary data, if the ODBC driver/manager supports this.
    Unicode | WCHAR (wchar_t *) | The conversion truncates the string at the SQL field length. Note that not all ODBC drivers/managers support Unicode data at C level.

    This binding is used for all cursors which do not have the .stringformat attribute set to EIGHTBIT_STRINGFORMAT. In EIGHTBIT_STRINGFORMAT mode (default), Unicode objects are converted to 8-bit strings first and then passed to the ODBC driver/manager.

    Buffer | BINARY (char *) | The conversion truncates the string at the SQL field length. The string may contain binary data. If the ODBC driver/manager doesn't support processing binary data using strings, wrap the data object using Python buffers (via the buffer() constructor) to have mxODBC use a binary SQL type for interfacing to the driver/manager.
    Integer | SLONG (signed long) | Conversion from the signed long to the SQL column type is left to the ODBC driver/manager, so expect the usual truncations.
    Long Integer | CHAR (char *) | Conversion from the Python long integer is done via the string representation since there usually is no C type with enough precision to hold the value.
    Float | DOUBLE (double) | Conversion from the Python float (a C double) to the SQL column type is left to the ODBC driver/manager, so expect the usual truncations.
    DateTime | TIMESTAMP | Converts the DateTime instance into a TIMESTAMP struct defined by the ODBC standard. The ODBC driver may or may not use the time part of the instance, depending on the SQL column type (DATE or TIMESTAMP).
    DateTimeDelta | TIME | Converts the DateTimeDelta instance into a TIME struct defined by the ODBC standard. Fractions of a second will be lost in this conversion.
    Buffer (new in Python 1.5.2) | CHAR (char *) | No conversion is done; the character buffer is passed directly to the ODBC driver/manager. The buffer has to be a single-segment buffer.
    Any other type | CHAR (char *) | Conversion is done by calling str(variable) and then passing the resulting string value to the ODBC driver/manager.

    Auto-Conversions

    While you should always try to use the above Python types for passing input values to the respective columns, the package will try to automatically convert the types you give into the ones the database expects when using the BIND_USING_SQLTYPE bind method (see the Constants section), e.g. an integer literal '123' will be converted into an integer 123 by the interface if the database requests integers.

    The situation is different in BIND_USING_PYTHONTYPE mode: the Python type used in the parameter is passed directly to the database, thus passing '123' or 123 does make a difference and could result in an error from the database.

    Note: You shouldn't rely on this feature in case you intend to move to another DB API compliant database module, since this is an extension to the DB API standard.

    Unicode

    Starting with version 2.0.0, mxODBC also supports Unicode objects to interface with databases. It turns out, though, that native Unicode is not supported by most existing ODBC drivers. The only ODBC driver I've heard of which does support native Unicode is the MS SQL Server ODBC driver. Unfortunately, I don't have access to a working installation to test mxODBC against (feedback would be appreciated).

    Even though you can usually not expect the ODBC driver to handle native Unicode, you can still take advantage of the auto-conversion mechanisms in mxODBC to simulate Unicode capabilities.

    mxODBC provides several different run-time configurations to deal with passing Unicode to and fetching it from an ODBC driver. The .stringformat attribute of connection and cursor objects allows defining how to convert string data into Python objects and vice-versa.

    Unicode conversions to and from 8-bit strings in Python usually assume the Python default encoding (which is ASCII unless you modify the Python installation). Since the database may be using a different encoding, mxODBC allows defining the encoding to be used on a per-connection basis. The .encoding attribute of connection objects is writeable for this purpose. Its default value is None, meaning that Python's default encoding is to be used. You can change the encoding by simply assigning a valid encoding name to the attribute. Make sure, though, that Python supports the encoding (you can test this using the unicode() builtin).

    The default conversion mechanism used in mxODBC is EIGHTBIT_STRINGFORMAT (Unicode gets converted to 8-bit strings before passing the data to the driver; output is always an 8-bit string), and the default encoding is Python's default encoding.

    To store Unicode in a database, one possibility is to use the UNICODE_STRINGFORMAT and set the encoding attribute to e.g. 'utf-8'. mxODBC will then convert the Unicode input data to UTF-8, store this in the database and convert it back to Unicode during fetch operations. Note however that UTF-8 encoded data usually takes up more room in the database than the Unicode equivalent, so you may experience data truncations which then cause the decoding process to fail.
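    The round trip mxODBC performs in this configuration is equivalent to the following pure-Python sketch, which also shows why the encoded form can take up more room than the character count suggests:

```python
text = u"Gr\xfc\xdfe"              # Unicode input parameter ("Gruesse")
stored = text.encode("utf-8")      # what would be sent to the database
fetched = stored.decode("utf-8")   # what a fetch would hand back as Unicode
```

    The two non-ASCII characters each occupy two bytes in UTF-8, so the stored form is longer than the Unicode string; size the database columns accordingly.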

    Comments

    The above SQL types are provided by the module as SQL type code integers as attributes of the singleton SQL, so that you can decode the value in cursor.description by comparing it to one of those constants. A reverse mapping of integer codes to code names is provided by the dictionary sqltype.

    Note: You may run into problems when using the tuple versions for date/time/timestamp arguments. This is because some databases (notably MySQL) want these arguments to be passed as strings. mxODBC does the conversion internally, but tuples turn out as '(1998,4,6)', which the database will refuse to accept. The solution: use DateTime[Delta] instances instead. These convert themselves to ISO dates/times which most databases (including MySQL) do understand.

    To check the ODBC driver/manager capabilities and support for the above column types, use the included test script.

Functions & Helpers

    For some subpackages, mxODBC also defines a few helpers which you can use to query additional information from the ODBC driver or manager.

    DataSources()

    Returns a dictionary mapping data source names to descriptions.

    This function is only available for ODBC managers and some ODBC drivers which have internal ODBC manager support, e.g. IBM's DB2 ODBC driver.

    In addition to subpackage specific helpers, mxODBC also provides a few additional functions available through the top-level ODBC package. These are:

    format_resultset(cursor,headers=(), colsep=' | ', headersep='-', stringify=repr)
    Fetch the result set from cursor and format it into a list of strings (one for each row):
    	    -header-
    	    -headersep-
    	    -row1-
    	    -row2-
    	    ...
    	    

    headers may be given as list of strings. If not given, or too short, the function will add numbered columns as appropriate.

    Columns are separated by colsep; the header is separated from the result set by a line of headersep characters.

    The function calls stringify to format the value data returned by the driver into a string. It defaults to repr().

    print_resultset(cursor, headers=())
    Pretty-prints the current result set available through cursor.

    headers can be given as list of column header strings.

Subpackages

    This section includes some specific notes for preconfigured setups.

    Note for Windows users: You should always use the ODBC.Windows subpackage and access the databases through the MS ODBC Driver Manager. The other packages provide Unix based interfaces to the databases.

    IMPORTANT:
    Even though the setups include many database specific settings, you should always check the paths and filenames used in the corresponding Setup file because these depend on your specific installation.

    You may also want to consult Paul Boddie's mxODBC Configuration page which has some details about specific database backends he has used with mxODBC.

    ODBC.Adabas -- SuSE Adabas D

    The SuSE Linux distribution ships with a free personal edition of Adabas (available in the form of RPMs from SuSE). A commercial version is also available, though I'd suggest first trying the personal edition.

    If you want to trim down the interface module size, try linking against a shared version of the static ODBC driver libs. You can create a pseudo-shared lib by telling the linker to wrap the static ones into a single shared one:

    ld -shared --whole-archive odbclib.a libsqlrte.a libsqlptc.a \
       -lncurses -o /usr/local/lib/libadabasodbc.so
    	    

    Note: The ADABAS ODBC driver returns microseconds in the timestamp fraction field. Because of this the Setup includes a define to do the conversion to seconds using a microseconds scale instead of the ODBC standard nanosecond scale (see the history section for more details on this problem).

    The module has been tested under Linux 2 with the Adabas D 6.1.1 Linux Edition. Since the ODBC driver for Adabas on Linux also provides the DriverConnect() API, it is exposed by the package as well (even though the driver itself is not an ODBC manager).

    ODBC.MySQL -- MySQL + MyODBC

    MySQL is a SQL database for Unix and Windows platforms developed by TCX. It is free for most types of usage (see their FAQ for details) and offers good performance and stability. To download MySQL and the ODBC driver MyODBC, check the www.mysql.org website.

    There is one peculiarity of the ODBC driver for MySQL: all input parameters are processed as strings -- even integers and floats. The ODBC driver implements the necessary conversions. mxODBC uses the Python Type binding method to bind the input parameters; see the Constants section.

    Since MySQL does not support transactions, clearing the auto-commit flag on connections (which the connection constructors normally do by default) will not work. The subpackage therefore simply uses auto-commit mode as default. If transaction support should become available, you can turn this work-around off by editing MySQL/Setup and removing the DONT_CLEAR_AUTOCOMMIT switch.
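    The fallback can be sketched generically for any DB API connection. This is an illustrative pattern, not the actual mxODBC internals; the stdlib sqlite3 module stands in for a transaction-capable driver, and the stub class for one without transactions:

    ```python
    import sqlite3

    def clear_autocommit_if_supported(conn):
        """Probe whether the connection supports transactions by issuing
        a rollback; if the driver raises, stay in auto-commit mode.
        Generic DB API sketch, not the mxODBC implementation."""
        try:
            conn.rollback()
            return True       # transactions available, auto-commit cleared
        except Exception:
            return False      # driver (e.g. MySQL + MyODBC) lacks them

    class AutoCommitOnly:
        """Stub mimicking a driver without transaction support."""
        def rollback(self):
            raise RuntimeError("transactions not supported")
    ```

    The probe returns True for an sqlite3 connection and False for the stub, i.e. the stub stays in auto-commit mode just as the MySQL subpackage does.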

    When using the MyODBC RPMs available from www.mysql.org, please be sure to also have the MySQL shared libs RPM and the MySQL development RPM installed.

    Important: The setup MySQL + MyODBC showed some serious memory leaks on my SuSE 5.3 (libc5) Linux machine; the leakage does not occur when accessing the Linux server from a Windows client or when running the setup using SuSE 6.2 (glibc6). The leakage could be libc or compiler related -- it is not mxODBC or MyODBC related. Please check your setup using the included test.py script.

    ODBC.PostgreSQL -- PostgreSQL [untested]

    PostgreSQL is a free SQL database for Unix. The subpackage setup was originally provided by Cyril Elkaim. I updated it to PostgreSQL 7. The package links directly to the PostgreSQL ODBC driver. An alternative setup would be connecting to the database via one of the free ODBC managers iODBC or unixODBC also supported by mxODBC.

    Even though I don't get any link errors and the ODBC driver does report errors, I haven't been able to actually connect to the PostgreSQL database. The driver keeps giving me strange error messages and I haven't yet figured out what exactly is causing them. Some of them indicate that the ODBC driver is not 100% ODBC 2.0 compatible, others are simply not leading in any obvious direction or seem to be caused by some communication sub-system used by the driver.

    If you can get the setup to work, I'd appreciate feedback.

    ODBC.Solid -- Solid Server

    Solid Tech. offers a free personal edition of their database for Linux in addition to the standard server and webserver licenses. More information about prices, licenses and downloads is available on their website.

    BTW: The Solid Server's low-level database API uses ODBC as interface standard (most other vendors have proprietary interfaces), so mxODBC should deliver the best performance possible.

    Note: The Solid ODBC driver leaves out some of the ODBC 2.0 catalog functions. The missing ones are: SQLTablePrivileges, SQLColumnPrivileges, SQLForeignKeys, SQLProcedures, SQLProcedureColumns. You won't be able to use the corresponding cursor methods.

    The setup for Solid was kindly donated by Andy Dustman from ComStar Communications Corp. He also found a long standing bug that needed fixing.

    ODBC.SybaseASA -- Sybase Adaptive Server Anywhere

    Sybase Adaptive Server Anywhere comes with its own ODBC driver against which mxODBC can link directly. The included Setup is for version 7 of the server.

    In case you are running Linux, Sybase has some information on its web-site about the ASA ODBC driver and its setup on Linux. This whitepaper should also be of interest.

    You can also use the OpenLink drivers for Sybase ASA: copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.

    In any case, you should also consult Paul Boddie's mxODBC Configuration page for the Sybase Adaptive Server Anywhere. It includes valuable information about the setup.

    Thanks to Paul Boddie and Sam Rushing for helping in getting the package together.

    ODBC.SybaseASE -- Sybase Adaptive Server Enterprise

    Note that you will first have to get the Sybase ASE ODBC drivers from Merant in order to use this subpackage -- Sybase ASE does not include ODBC drivers (it's a completely different product than Sybase ASA). This whitepaper has some details about ODBC connectivity of ASE.

    Gary Pennington from Sun Microsystems reported that the Merant evaluation drivers work with Sybase Adaptive Server 11.5 on Solaris 2.6.

    You can also use the OpenLink drivers for Sybase ASE: copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.

    In any case, you should also consult Paul Boddie's mxODBC Configuration page for the Sybase Adaptive Server Enterprise version. It includes valuable information about the setup.

    ODBC.Oracle -- Oracle

    Oracle for Unix doesn't ship with Unix ODBC drivers. You can get them from Merant or OpenLink though (see the Hints section for URLs).

    Once you have installed the ODBC drivers following the vendor's instructions, run make -f Makefile.pre.in boot in the Oracle/ subdirectory, enable the appropriate set of directives in Setup and then run make to finish the compilation.

    Using Merant drivers is reported to work. Shawn Dyer (irin.com) has kindly provided the setup for this combination and some additional notes:

    ...we also set the following environment variables:

    LD_LIBRARY_PATH= both the oracle lib path and the Merant library path
    ODBCINI= the odbc.ini file in the Merant install

    Once you talk to the Merant odbc driver, it seems to be a simple matter of setting up the ODBC data source name in their .ini file that has that stuff. At that point you can talk to any of their ODBC drivers you have installed.

    To use the OpenLink driver setup instead copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.

    ODBC.Informix -- Informix SQL Server

    Informix for Unix doesn't come with Unix ODBC drivers, but there are a few sources for these: Informix sells the driver under the name "Informix CLI"; Merant and OpenLink also support Informix through their driver suites (see the Hints section for URLs).

    Note: There is also a free Informix SDK available for a few commercial Unix platforms like HP-UX and Solaris. It includes the needed ODBC libs and header files (named infxcli.h and libifsql.a).

    Once you have installed the ODBC drivers following the vendor's instructions, enable the appropriate set of directives in Setup, run make -f Makefile.pre.in boot in the Informix/ subdirectory, and then run make to finish the compilation.

    To use the OpenLink driver setup instead copy Setup.in to Setup and enable the OpenLink section in Setup before compiling.

    Gilles Lenfant has mailed me these instructions which you might find useful in setting up the Informix subpackage:

    In addition to editing the Setup (or Setup.in) file before running "make -f Makefile.pre.in boot", I made it compile and run with the following changes in the Informix section of the "Setup" file (according to the latest "Informix ODBC Driver Programmer's Manual").

    Note that this manual must be read carefully for the setup of the Informix related environment variables: the user must have $INFORMIXDIR (the Informix client software root) set, and LD_LIBRARY_PATH must include "$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/cli".

    Compiling mxODBC requires the Informix Client SDK (a free download from intraware.com, needed at compile time) and the ESQL/C libraries (client run-time libraries provided on the server CD - not free).

    In addition, fixes and tuning must be done in $INFORMIXDIR/etc/odbcinst.ini, and the user must configure the data sources in $HOME/.odbc.ini or the file named by $ODBCINI.

    ODBC.DB2 -- IBM DB2 Universal Database

    IBM provides a free personal edition of the powerful DB2 database engine which happens to use ODBC as native C level interface.

    This package interfaces directly to the ODBC driver included in the UNIX edition of the database. If you want to access DB2 from Windows NT, please use the Windows subpackage.

    There is one quirk you should watch out for: in order to connect to the IBM DB2 database the DB2INSTANCE environment variable must be set to the name of the DB2 instance you would like to connect to.

    There may be more environment variables needed, please check the scripts that come with DB2 called db2profile (for bash) or db2cshrc (for C shell) which set the environment variables. Without having these set, mxODBC will fail to load and give you a traceback:

    Traceback (most recent call last):
    ...
        from mxODBC import *
    ImportError: initialization of module mxODBC failed
    (mxODBC.InterfaceError:failed to retrieve error information (line 6778,
    rc=-1))
    	      
    Unfortunately, the provided shell scripts are buggy, so simply sourcing them won't do any good; you will have to carefully create your own. A typical problem is that the scripts set LIBPATH or LD_LIBRARY_PATH which then causes the following traceback when trying to load mxODBC:
    Traceback (most recent call last):
    ...
    ImportError: from module mxODBC.so No such file or directory
    	      

    Also note that DB2 needs to be explicitly told that you want to connect to the database using ODBC. This is done by binding the IBM CLI driver against the database. Please consult the IBM DB2 documentation for details.

    If you want to use the DriverConnect() API, you'll have to configure the IBM ODBC driver's data source INI file which is named db2cli.ini and usually found in the same directory as the above script files.
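    A pre-flight check of the environment variables discussed above gives a clearer error than the import traceback. The helper below is illustrative (only DB2INSTANCE is documented here as strictly required; db2profile/db2cshrc may set more):

    ```python
    import os

    def missing_db2_env(required=("DB2INSTANCE",)):
        """Return the names of required DB2 environment variables that
        are not set. Illustrative helper; extend the tuple with whatever
        additional variables your db2profile script exports."""
        return [name for name in required if not os.environ.get(name)]
    ```

    Calling this before importing the DB2 subpackage lets a script fail with an explicit message instead of the InterfaceError shown above.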

    ODBC.DBMaker -- CASEMaker's DBMaker Database

    DBMaker has a small lean and mean SQL database that comes with an ODBC driver. This subpackage interfaces directly to that ODBC driver.

    Note: DBMaker's ODBC driver doesn't have all the advertised SQL catalog functions (the privilege functions are missing) and also doesn't support the .nativesql() method. It does provide a DriverConnect() API, though, which might be useful for connecting to databases across a network.

    mxODBC currently does not support the use of file objects for input and output of large objects. This may change in a future version, though (the needed techniques are already in place).

    The subpackage links against the DBMaker driver without problems. Testing has only been preliminary, but since CASEMaker itself advertises mxODBC as a Python interface for DBMaker, I guess they have done some testing already ;-).

    ODBC.Windows -- Windows ODBC Driver Manager

    mxODBC compiles on Windows using VC++ and links against the Windows ODBC driver manager. The necessary import libs and header files are included in the VC++ package but are also available for free in the Microsoft ODBC SDK (now called MDAC SDK). Note that the latter is usually more up-to-date.

    Compiling the module has to be done in the usual VC++ way (see the Windows install instructions), producing a DLL named mxODBC.pyd. All necessary files are located in the Windows/ subdirectory of the package, the main target being mxODBC.cpp.

    Martin Sckopke (gis.ibfs.de) reported that when connecting to an ADABAS database through the ODBC manager, no 'host:' prefix to the DSN is necessary. He has the module running on Windows NT and is interfacing to ADABAS D and Oracle 8.0.x without problems.

    Stephen Ng (grossprofit.com) who contributed the previous compiled versions and the VC6 project files is using mxODBC to interface to MS Access and now MS SQL Server.

    Notes:

    Use the DriverConnect() API to connect to the data source if you need to pass in extra configuration information such as names of log files, etc.

    If you have installed the win32 extensions by Mark Hammond et al. you'll run into a naming collision: there already is an odbc module (all lowercase letters) in the distribution that could be loaded instead of the ODBC package (all uppercase letters) depending on your configuration.

    AFAIK, there are at least three ways to change this:

    1. rename odbc.pyd from the win32 extension to e.g. win32odbc.pyd
    2. add the ODBC package to a directory that appears before the directory in which odbc.pyd lives
    3. change the Python path (sys.path) to make sure the ODBC package is found first.
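    Option 3 can be done at the top of your script. The directory shown is a placeholder; point it at wherever the mx package is installed on your machine:

    ```python
    import sys

    # Placeholder path -- replace with the directory containing the mx/
    # package so it is searched before the win32 extensions directory.
    MX_DIR = r"C:\Python\site-packages"
    if MX_DIR not in sys.path:
        sys.path.insert(0, MX_DIR)
    ```

    Since sys.path is searched front to back, the ODBC package (uppercase) is then found before the win32 odbc module (lowercase).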

    The subpackage defaults to SQL type binding mode, but reverts to Python type binding in case the connection does not support the SQLDescribeParam() API. MS Access is one candidate for which this API is not useable.

    ODBC drivers working on single files, e.g. the MS Excel file driver, usually do not support transactions. mxODBC will not clear auto-commit for these drivers (it may sometimes still be necessary to set the clear_auto_commit flag in the connect constructors to 0).

    If you have trouble with multiple cursors on connections to MS SQL Server, the MS Knowledge Base Article INF: Multiple Active Microsoft SQL Server Statements has some valuable information for you. It seems that you'll have to force the usage of server side cursors to be able to execute multiple statements on a single connection to MS SQL Server. According to the article this is done by setting the connection option SQL.CURSOR_TYPE to e.g. SQL.CURSOR_DYNAMIC:

         dbc.setconnectoption(SQL.CURSOR_TYPE,SQL.CURSOR_DYNAMIC)
    	    
    Thanks to Damien Morton for tracking this down and digging up the MS KB article.

    If you want to connect to a file data source (without having to configure it using the ODBC manager), you can do so by using the FILEDSN= parameter instead of the DSN= parameter:

         DriverConnect('FILEDSN=test.dsn;UID=test;PWD=test')
    	    
    This is sometimes useful when you want to dynamically setup a data source, e.g. a MS Access database.

    For more information about the FILEDSN-keyword and the other Windows ODBC manager features, see the Microsoft SQLDriverConnect() documentation.
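    Since DriverConnect() strings are just KEY=value pairs joined by semicolons, a small helper can assemble them. The helper itself is my own illustration (not part of mxODBC); the keywords FILEDSN, DSN, UID and PWD are the ones documented above:

    ```python
    def make_conn_string(**params):
        """Build an ODBC connection string such as
        'FILEDSN=test.dsn;UID=test;PWD=test' from keyword arguments.
        Illustrative helper, not part of the mxODBC API."""
        return ';'.join('%s=%s' % (key.upper(), value)
                        for key, value in params.items())

    conn_str = make_conn_string(filedsn='test.dsn', uid='test', pwd='test')
    ```

    The resulting string can then be passed to DriverConnect() as shown in the FILEDSN example above.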

    ODBC.iODBC -- Unix iODBC Driver Manager maintained by OpenLink

    mxODBC compiles against the iODBC version available from the www.iODBC.org site.

    Note: Use the DriverConnect() API to connect to the data source if you need to pass in extra configuration information such as names of log files, etc.

    I've successfully tested the interface with iODBC-2.50.3 on Linux. Note however that this version has a bug in the SQLDataSources() API which causes the mxODBC API DataSources() to return truncated data. Later versions may not have this problem.

    Hint: You may experience problems when trying to connect to MySQL via MyODBC hooked to iODBC in case you are using the binary RPMs available from www.mysql.org. For some reason, the MyODBC driver does not reference the MySQL shared libs it needs to connect to the MySQL server and there's no way to tell iODBC to load two shared libs. Here's a hack which will allow you to create an import lib which solves the problem on Linux:

    rm -f /usr/local/lib/libmyodbc.so
    ld -shared --whole-archive /usr/local/lib/libmyodbc-2.50.34.so \
                               /usr/lib/libmysqlclient.so.10 \
               -o /usr/local/lib/libmyodbc.so
    ldconfig
    	    

    ODBC.unixODBC -- Unix unixODBC Driver Manager

    mxODBC compiles against the current unixODBC version available from the www.unixODBC.org site.

    Note: Use the DriverConnect() API to connect to the data source if you need to pass in extra configuration information such as names of log files, etc.

    ODBC.EasySoft -- EasySoft ODBC-ODBC Bridge 2000

    EasySoft has developed an ODBC-ODBC bridge which allows you to connect to e.g. a MS SQL Server running on an NT box from your Linux web-server.

    The included setup was developed for the beta 0.2.4 of that bridge but should also work fine against the release version. You can download it via the product page or via FTP as trial version. The personal editions are said to be available for free in the near future.

    Remember to download setups for client (Linux in the example) and server (NT in the example).

    Not mentioned here...

    Check out the list of links to other resources in the next section. Creation of new sub packages is explained in the installation guide.

Hints & Links to other Resources

    Running mxODBC from a CGI script

    ODBC drivers and managers are usually compiled as a shared library. When running CGI scripts most HTTP daemons (aka web servers) don't pass through the path for the dynamic loader (e.g. LD_LIBRARY_PATH) to the script, thus importing the mxODBC C extension will fail with unresolved symbols because the loader doesn't find the ODBC driver/manager's libs.

    To have the loader find the path to those shared libs you can either wrap the Python script with a shell script that sets the path according to your system configuration or tell the HTTP daemon to set or pass these through (see the daemon's documentation for information on how to do this; for Apache the directives are named SetEnv and PassEnv).
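    The wrapper approach can be sketched as follows; the library directory is an example only and must be adjusted to where your ODBC driver/manager libs actually live:

    ```shell
    #!/bin/sh
    # Prepend the ODBC driver/manager lib directory (example path) so the
    # dynamic loader can resolve mxODBC's shared-library symbols.
    ODBC_LIB_DIR=/usr/local/lib
    LD_LIBRARY_PATH="${ODBC_LIB_DIR}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
    # Then hand control to the real CGI script, e.g.:
    #   exec python /var/www/cgi-bin/myscript.py
    ```

    The ${VAR:+...} expansion appends the existing path only if one was set, so the variable never ends in a stray colon.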

    Freezing mxODBC using py2exe

    Thomas Heller has written a great tool which is based on the new distutils support in Python 2.0 and later. The tool allows you to freeze your application into a single standalone Windows application and is called py2exe.

    When freezing mxODBC you may run into problems with py2exe not finding the DLLs needed by mxODBC. In this case you have to help py2exe find the correct subpackage for Windows. This can be done by adding -i mx.ODBC.Windows,mx.DateTime to the py2exe command line: python py2exe -i mx.ODBC.Windows,mx.DateTime yourapp.py. After doing so, py2exe should have no problem finding the files mxODBC.pyd and mxDateTime.pyd needed by mx.ODBC.Windows and mx.DateTime.

    Note: Freezing mxODBC together with an application and redistributing the resulting executables requires that you have obtained a proper license from eGenix.com permitting you to redistribute mxODBC along with a product. Please see the License section for more information.

    Other Resources...

    There are several resources available online that should help you getting started with ODBC. Here is a small list of links useful for further reading:

    Microsoft ODBC Site
    Microsoft is constantly developing new forms of database access. For a close-up on what they have come up with recently, take a look at their ODBC site. Note that they now call their ODBC SDK "Microsoft Data Access Components SDK" (MDAC). It focuses not only on ODBC but also on OLE DB and ADO.

    Note: If you are not happy about the size of the SDK download (over 31MB), you can also grab the older 3.0 SDK which might still be available from a FTP server. Look for "odbc3sdk.exe" using e.g. FTP Search.

    Microsoft also supports a whole range of (desktop) ODBC drivers for various databases and file formats. These are available under the name "ODBC Desktop Database Drivers" (search the MS web-site for the exact URL) [wx1350.exe] and also included in the more up-to-date "Microsoft Data Access Components" (MDAC) archive [mdac_typ.exe].

    Microsoft ODBC FAQ
    This list has a few interesting links into the Microsoft ODBC site. If you're looking for the latest SQL Server or Oracle ODBC drivers this is the place to look first.

    ODBC Documentation
    The ODBC documentation is included in the free MS MDAC SDK which you can download from their ODBC site.

    PDF versions of the MS ODBC SDK docs reformatted for the Solid Server (it uses ODBC as its native API) are available from SolidTech.

    CorVu List of ODBC drivers
    A collection of available ODBC driver packages. This should be the first place to look in case you are searching for ODBC connectivity to your database.

    Looking for Windows ODBC drivers ?
    As noted under the Microsoft ODBC Site entry above, Microsoft supports a whole range of (desktop) ODBC drivers, available as the "ODBC Desktop Database Drivers" [wx1350.exe] and also included in the more up-to-date "Microsoft Data Access Components" (MDAC) archive [mdac_typ.exe].

    Last time I checked it included ODBC drivers for: Access, dBase, Excel, Oracle, Paradox, Text (flat file CSV), FoxPro, MS SQL Server.

    If you need to connect to databases running on other hosts, please contact the database vendor or check the CorVu list of ODBC drivers.

    Looking for Unix ODBC drivers ?
    If you want to run mxODBC in a Unix environment and your database doesn't provide a Unix ODBC driver, you can try the drivers sold by Merant (formerly Intersolv). They have 30-day evaluation packages available. To see if their ODBC driver package supports your client/server setup check these two matrices: server based or client based solutions.

    Another source for commercial ODBC drivers is OpenLink. To see if they support your client/server setup check this matrix. They are giving away 2-client/10-connect licenses for free.

    For a fairly large list of sources for ODBC drivers have a look at the ODBC driver page provided by the CorVu Corporation. They also have some informative pages describing common database functions and operators, which are helpful.

    If you would like to connect to a database for which you don't have a Unix ODBC driver, you can also try the ODBC-ODBC bridge from EasySoft which redirects the queries to e.g. the NT ODBC driver for the database.

    Alternatively, you could write a remote client (in Python) that communicates with your database via a WinNT-Box. Most databases provide Win95/NT ODBC drivers so you can use mxODBC with the Windows ODBC manager. This method is not exactly high-performance, but cheaper (provided you can come up with a running version in less than a day's work, that is...). The Python standard lib module SocketServer.py should get you going pretty fast. Protocol and security are up to you, of course.

Examples

    Here is a very simple example of how to use mxODBC. More elaborate examples of using Python Database API compatible database interfaces can be found in the Database Topic Guide on http://www.python.org/. Andrew Kuchling's introduction to the Python Database API is especially good reading. There are also a few books on using Python DB API compatible interfaces, some of which cover mxODBC explicitly.

    On Unix:

    >>> import mx.ODBC
    >>> import mx.ODBC.iODBC
    >>> db = mx.ODBC.iODBC.DriverConnect('DSN=database;UID=user;PWD=passwd')
    >>> c = db.cursor()
    >>> c.execute('select count(*) from test')
    >>> c.fetchone()
    (305,)
    >>> c.tables(None,None,None,None)
    8
    >>> mx.ODBC.print_resultset(c)
    Column 1 | Column 2 | Column 3       | Column 4 | Column 5     
    ---------------------------------------------------------------
    ''       | ''       | 'test'         | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'test1'        | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'test4'        | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testblobs'    | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testblobs2'   | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testdate'     | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testdates'    | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testdatetime' | 'TABLE'  | 'MySQL table'
    >>> c.close()
    >>> db.close()
    >>>
    	

    On Windows:

    >>> import mx.ODBC
    >>> import mx.ODBC.Windows
    >>> db = mx.ODBC.Windows.DriverConnect('DSN=database;UID=user;PWD=passwd')
    >>> c = db.cursor()
    >>> c.execute('select count(*) from test')
    >>> c.fetchone()
    (305,)
    >>> c.tables(None,None,None,None)
    8
    >>> mx.ODBC.print_resultset(c)
    Column 1 | Column 2 | Column 3       | Column 4 | Column 5     
    ---------------------------------------------------------------
    ''       | ''       | 'test'         | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'test1'        | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'test4'        | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testblobs'    | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testblobs2'   | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testdate'     | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testdates'    | 'TABLE'  | 'MySQL table'
    ''       | ''       | 'testdatetime' | 'TABLE'  | 'MySQL table'
    >>> c.close()
    >>> db.close()
    >>>
    	

    As you can see, mxODBC has the same interface on Unix and Windows which makes it an ideal basis for writing cross-platform database applications.

    Note: When connecting to a database with transaction support, you should explicitly do a .rollback() or .commit() prior to closing the connection. In the example this was omitted since the database backend MySQL does not support transactions.
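    With a transactional backend, the pattern looks like this. The sketch uses the stdlib sqlite3 module, which is also DB API 2.0 compliant, as a stand-in for an mxODBC connection:

    ```python
    import sqlite3

    db = sqlite3.connect(':memory:')   # stand-in for an mxODBC connection
    try:
        c = db.cursor()
        c.execute('CREATE TABLE test (n INTEGER)')
        c.execute('INSERT INTO test VALUES (305)')
        db.commit()                    # make the changes permanent ...
        c.execute('SELECT count(*) FROM test')
        count = c.fetchone()[0]
    finally:
        db.rollback()                  # ... drop any pending work
        db.close()                     # before closing the connection
    ```

    Explicitly committing or rolling back before .close() avoids relying on the driver's implicit transaction handling at disconnect time.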

Package Structure

    [ODBC]
           [Adabas]
                  dbi.py
                  dbtypes.py
                  showdb.py
           [DB2]
                  dbi.py
                  dbtypes.py
           [DBMaker]
                  dbi.py
                  dbtypes.py
           Doc/
           [EasySoft]
                  dbi.py
                  dbtypes.py
           [Informix]
                  dbi.py
                  dbtypes.py
           [Misc]
                  dbinfo.py
                  proc.py
                  test.py
           [MySQL]
                  dbi.py
                  dbtypes.py
           [Oracle]
                  dbi.py
                  dbtypes.py
           [PostgreSQL]
                  dbi.py
                  dbtypes.py
           [Solid]
                  dbi.py
                  dbtypes.py
           [SybaseASA]
                  dbi.py
                  dbtypes.py
           [SybaseASE]
                  dbi.py
                  dbtypes.py
           [Windows]
                  dbi.py
                  dbtypes.py
           [iODBC]
                  dbi.py
                  dbtypes.py
           [mxODBC]
                  dbi.py
                  dbtypes.py
                  makeSQLCodes.py
           [unixODBC]
                  dbi.py
                  dbtypes.py
           LazyModule.py
           ODBC.py
    	

    Entries enclosed in brackets are packages (i.e. they are directories that include a __init__.py file). Ones with slashes are just simple subdirectories that are not accessible via import.

Manual Installation Procedure

    First, you will have to install another extension I wrote called mxDateTime which is part of the eGenix.com mx BASE package. Be sure to always fetch the latest release of both packages, since I always synchronize the two whenever something changes. If you only update one of them, you may run into problems later.

    After that is installed and running, download the eGenix.com mx COMMERCIAL package which is available from the same location and unzip it to a temporary directory.

    You may also want to download the ODBC reference manuals from SolidTech which I used to develop this package.

    Note: The subpackages section contains database specific installation notes. You may want to read those first before continuing the setup.

    Next, follow the steps below for each of the subpackages that you intend to use.

    If none of them fits your database configuration, create a new directory MyDatabase first and proceed as follows (please send me the modified Setup and mxODBC.h files for inclusion in future releases).

    Unix:

    Instructions for compiling the C extension(s) on Unix platforms:

    Be sure that you have the Python development files installed on your machine (these are usually located in /usr/local/lib/pythonX.X/config/ or /usr/lib/pythonX.X/config/). If you don't, look for a python-devel or similar installation archive for your OS distribution and install this first.

    1. cd to the database subpackage directory (e.g. mx/Adabas/, mx/MySQL/)

      If you are setting up a new database subpackage, copy all files from the mx/mxODBC directory to the subpackage directory and then edit the mxODBC.h header file and include the appropriate header files for your database

    2. Boot the Makefile system by running

      make -f Makefile.pre.in boot

    3. Edit the file Setup:

      Fix the include directories, libs and lib paths (this file uses the same syntax as the Modules/Setup file that you edit to configure Python); the file also contains some database specific hints

    4. Build the database interface by running

      make

    5. Get your database running, and try to connect with the new module.

      If you get an error like 'unresolved symbol: SQLxxx', try to add a '-DDONT_HAVE_SQLxxx' flag to the setup line in Setup and recompile (make clean; make).

    6. Run the test.py script in the mx/ODBC/Misc/ package directory from the temporary directory (the one where setup.py lives).

    7. If all goes well, edit the file mxCOMMERCIAL.py and include the new subpackage in the list of Python modules and C extension modules. The file has instructions on how to do this.

      I would appreciate hearing about the changes you make in mxCOMMERCIAL.py, since I plan to enhance the installer to provide auto-detection of installed ODBC drivers. If you provide such information, you will be entitled to a free upgrade to the next version of mxODBC.

    8. Finally, run python setup.py install from the temporary directory. The distutils-based installer will then compile and install the files in your standard Python package tree (usually under pythonX.X/lib/site-packages).

    You now have a new package called ODBC with subpackages for each of your ODBC databases. Accessing a particular database is done by calling the connection constructor of the database subpackage, e.g. connection = ODBC.Adabas.Connect('host:DB','user','passwd'). You can use multiple databases at the same time using this mechanism.

    If you plan to use the dbi.py abstraction module for a particular database, you can access the specific version for that database by importing ODBC.<database name>.dbi, e.g. ODBC.Adabas.dbi. Note that the dbi module is deprecated by DB API 2.0 and no longer maintained.

    Please post any bug reports, questions etc. to the db-sig@python.org (see the Python Website for details on how to subscribe) or mail them directly to me.

Testing the Database Connection

    The package includes a rudimentary test script that checks some of the database's features. As a side effect, this also provides a good regression test for the mxODBC interface.

    To start the test, simply run the script in ODBC/Misc/test.py. It will generate a few temporary tables (named mxODBC0001, mxODBC0002, etc.; no existing tables will be overwritten) and then test the communication between interface and database. The tables are removed after the tests have run through. Here is some typical output:

    MySQL 3.22.20a with MyODBC 2.50.22a (direct connection):

    Subpackage Name [MySQL]: 
    DriverConnect arguments [DSN=test;UID=root]: 
    Clear AUTOCOMMIT  ? (1/0) [0] 
    Run tests continuously to check for leaks ? (y/n) [n] 
    Show driver type information ? (y/n) [n] 
    Output file [stdout]: 
    
    Test suite:
     Connecting to the database.
     Connected to DBMS MySQL 2.50.22 using driver myodbc.dll 2.50.22; ODBC 02.50
     BIGINT column type with 64bits              : supported
     BINARY column type                          : type not supported
     BIT column type                             : supported
     BLOB column type with binary data           : supported
     BLOBs with >32kB ASCII data                 : supported
     CHAR column type                            : supported
     CHAR column type with binary data           : supported
     CHAR padding (with spaces)                  : not supported
     DATE column type                            : supported
     DATETIME column type                        : supported
     DATETIME with string values                 : not supported
     DECIMAL column type                         : supported
     DOUBLE column type                          : supported
     FLOAT column type                           : supported
     IMAGE column type                           : type not supported
     INT column type                             : supported
     INT column type using array processing      : supported
     LONG BYTE column type                       : type not supported
     LONG BYTE with >32kB ASCII data             : type not supported
     LONG column type                            : type not supported
     LONGs with >32kB ASCII data                 : type not supported
     LONGs with binary data                      : type not supported
     MEMO column type                            : type not supported
     MEMOs with >32kB ASCII data                 : type not supported
     MEMOs with binary data                      : type not supported
     TEXT column type                            : supported
     TIME column type                            : supported
     TIMESTAMP column type                       : supported
     TIMESTAMP data having fractions             : not supported
     Transactions                                : not supported
     VARBINARY column type                       : type not supported
     VARCHAR column type                         : type not supported
     VARCHAR column type with binary data        : type not supported
     Variable Bind Method                        : Python Type
     Disconnecting.
    	

Support

    eGenix.com is providing commercial support for this package. If you are interested in receiving information about this service please see the eGenix.com Support Conditions.

Copyright & License

    © 1997-2000, Copyright by Marc-André Lemburg; All Rights Reserved. mailto: mal@egenix.com

    © 2000-2004, Copyright by eGenix.com Software GmbH, Langenfeld, Germany; All Rights Reserved. mailto: info@egenix.com

    This software is covered by the eGenix.com Commercial License Agreement. The text of the license is also included as file "LICENSE" in the package's main directory.

    Please note that using this software in a commercial environment is not free of charge. You may use the software during an evaluation period as specified in the license, but subsequent use requires the ownership of a "Proof of Authorization" which you can buy online from eGenix.com.

    Please see the eGenix.com mx Extensions Page for details about the license ordering process.

    By downloading, copying, installing or otherwise using the software, you agree to be bound by the terms and conditions of the eGenix.com Commercial License Agreement.

History & Future

    Things that still need to be done:

    • Provide some examples.

    • Implement full DB API 2.0 standard compliance, i.e. implement .callproc(), .nextset() and (a fully compliant) .executemany().

    • Add an alias ODBC.Manager defaulting to the standard ODBC manager on the given platform (iODBC or unixODBC on Unix and the Windows ODBC manager on WinXX).

    • Fix the problem with the iODBC subpackage dumping core when used with some ODBC drivers in some combinations (e.g. iODBC 2.50.3 + MyODBC 2.50.28 shows this behaviour on RedHat -- at least on the one I tested it on). It is not directly related to mxODBC, because the core dump occurs in iODBC when it tries to unload an ODBC driver shared lib.

    • Replace the current method of reporting warnings using Python exceptions with a cursor/connection list attribute .warnings. This will then hold the current set of warnings and get updated by each new action taken on the connection or cursor, so testing c.warnings after a c.execute() will work as a method for verifying correct operation.

    Changes from 2.0.5 to 2.0.6:

    • Added DB-API attribute threadsafety.

    Changes from 2.0.4 to 2.0.5:

    • Fixed a bug in the native Unicode handling code for long object columns. mxODBC now also supports reading and writing Unicode to and from longchar and ntext columns for MS SQL Server.

    Changes from 2.0.3 to 2.0.4:

    • Fixed a bug in the native Unicode handling code. This only affected situations where the ODBC driver supports native Unicode -- the MS SQL Server ODBC driver is an example of such a driver.

    • Rearranged the code for fetching SMALLINTs, TINYINTs and BITs to do the fetching using a C short rather than a C long. Some ODBC drivers are buggy in that they fail to correctly convert these small integers to longs.

    Changes from 2.0.2 to 2.0.3:

    • Enhanced test script (mx.ODBC.Misc.test) to show available data sources for ODBC managers before connecting.

    • Added docs about usage of mxODBC with Thomas Heller's py2exe.

    • Added work-around for some ODBC drivers which don't initialize the buffers for small integers correctly. This led to wrong results when reading data from SMALLINT columns with these drivers. Writing to these column types did not pose a problem. Thanks to Villiam Manera for bringing this to my attention again (mxODBC already included a hint to the problem, but no actual fix for it).

    Changes from 2.0.1 to 2.0.2:

    • Added a new section on stored procedures to the docs. Thanks to Jim Vickroy for finding out about the usage of PRINT in MS SQL Server stored procedures.

    • Tested under Python 2.1.

    Changes from 2.0.0 to 2.0.1:

    • Fixed the documentation with respect to debugging builds. The default configuration does contain the debugging code. See the debugging section for details on how to build a debugging version.

    • Changed handling of warnings during database connect: warnings will no longer be reported as exceptions. This should allow connects to e.g. MS SQL Server to work without having to recompile mxODBC on Windows.

    • Fixed a bug in the handling of direct execution (.executedirect()). Input parameters passed to .executedirect() will be bound using Python bindings.

    • Reverted a change introduced in 2.0.0 which preferred the ODBC 3.5 data types for date/time values over the old ones. The MS ODBC manager has massive problems with these, which cause all queries involving date/time columns to fail. mxODBC 2.0.1 will now again use the old type codes, which work fine with all ODBC drivers/managers.

    • Added experimental SAP DB support as new subpackage mx.ODBC.SAPDB.

    • Added more tests to the test script.

    Changes from 1.1.1 to 2.0.0:

    • mxODBC will try to fetch data from columns having an unsupported data type, e.g. a driver specific one, as strings.

    • mxODBC tests whether a data source is capable of fetching columns using SQLGetData (needed for LONG columns and ones with unknown size) at any column position in the SELECT. Some drivers don't support this, e.g. the one for MS SQL Server, and previously only allowed a SELECT column order where LONG columns appear at the end of the SELECT column list.

      Note that even though mxODBC should now be capable of dealing with improper order, you'll get higher performance by ordering the columns in such a way that only the last columns on the list need to be fetched via SQLGetData while the others can be bound directly to a memory buffer and fetched without an extra call.

    • In case a driver posts multiple error messages for one error condition, all errors are now cleared and written to the debug log (if enabled). The MS drivers are ones that provide more information this way. Only the last error message is returned in the form of an exception to the program. To see the others, run your script using 'python -d script.py' and then check the mxODBC.log file.

    • Added unixODBC subpackage. This is still experimental though.

    • Added .executemany(). The method currently only accepts lists of tuples as input, not any sequence of sequences as stated in the DB API 2.0 spec. This will be implemented in a later version of mxODBC.

    • Merged the mxODBC documentation with the DB API 2.0 spec.

    • Added a new sub module to all subpackages, dbtypes.py, which is imported by all __init__.py package initializers. It includes the type objects and constructors as defined in the DB API 2.0. If you have setup your own subpackages, you will have to update them accordingly.

    • Changed all the catalog cursor methods to accept keyword parameters. The methods use reasonable defaults for all parameters not explicitly given.

    • Added cursor type converter functions which allow you to define output data type mappings on-the-fly.

    • Changed from using the old and deprecated UCHAR, SDWORD, etc. datatypes to the new portable (?) ones with SQL prefix, e.g. SQLINTEGER, SQLCHAR, etc. I hope this doesn't bomb any existing setup. If it does, mxODBC.h has some defines to revert back to the old style: define ODBC_COMPATIBILITY_DEFINES in your Setup file.

    • Added DB2 subpackage. Not really tested yet though, because I haven't found time to set up a working DB2 installation. Compiling and linking does work.

    • Added PostgreSQL subpackage provided by Cyril Elkaim.

    • Added cursor.freeset() method.

    • Added cursor.prepare() method and cursor.command attribute.

    • Changed the return values of all new .executeXXX() methods to always return None. Use cursor.rowcount to query the number of available/affected rows (as defined in the DB API 2.0 spec).

    • Modified the internal Python long to string conversion to match the new Python string format for longs (without trailing 'L').

    • Modified the internals of mxODBC to always return empty lists or None when fetching beyond the end of a result set. Some drivers return a sequence error when this is done. The change was needed to synchronise behaviour among drivers. Thanks to Rob Riggs for bringing this to my attention.

    • Updated docs on .getinfo(): the code uses an int cast to decode the data into an integer -- it is not necessarily a 32-bit value and byte order is also platform dependent. If you want to be sure decoding is done correctly, please use the string return value and decode using the struct module.

    • Added setup notes for Sybase Adaptive Server Anywhere. Thanks to Paul Boddie for the very useful report.

    • Moved the whole package under the 'mx' package tree. It is part of the eGenix.com mx COMMERCIAL distribution.

    • Changed the mxODBC license model. See the license section for details.

    • Fixed dbtypes.py _HybridValue object to use id() instead of comparing against the self.values tuple. It should now work correctly with arbitrary objects.

    • Added a compile time switch to fix a case where you'd sometimes get a function sequence error when executing an already prepared cursor a second time. mxODBC normally closes the cursor before execution. If CLOSE_AFTER_FETCH_ONLY is defined during compilation, it now only does so in case it has to prepare a new command on the cursor or a fetch was in progress. Thanks to Paul Boddie for bringing this to my attention.

    • Changed the connect behaviour: AUTO-COMMIT will only be cleared if the database supports transactions. The clear_auto_commit flag is only of importance for databases which do support transactions and can then be used to leave the default COMMIT behaviour as defined by the driver.

    • Added connection.nativesql().

    • Fixed a bug with the compile time switch DONT_REPORT_WARNINGS and added better support to the error generating mechanism that ensures that warnings are not reported when this is not meaningful or could harm program flow (some ODBC drivers like older versions of the MySQL driver return warnings without SQL_SUCCESS_WITH_INFO return value).

    • Added .executedirect() and direct flag to .executemany() which causes mxODBC to use SQLExecDirect() for one-shot executes. This can sometimes result in noticeable performance boosts. It will disable caching of the command though.

    • Changed .execute() to use SQLExecDirect() for calls which don't use parameters.

    • Added hooks to work-around a bug in the MS Access ODBC driver: columns with type VARCHAR have a precision of 2GB since Win2K SP1 and mxODBC tried to allocate a 2GB buffer for these columns. It now uses SQLGetData() for all columns >64kB. Thanks to Joe Salmeri for bringing this to my attention.

    • Added experimental support for Unicode. This needs Python 2.0 (it should work with Python 1.6 too). To enable the Unicode support, compile the subpackages with -DWANT_UNICODE_SUPPORT.

    • Changed SQL class to a SQL singleton object. Lookup is now done using a C function which searches in static C data. This should enable sharing of the data between processes and reduce the memory footprint of mxODBC.

    • Added DataSources() API to some subpackages, notably iODBC and DB2.

    • Added cursor.closed attribute.

    • Added DBMaker subpackage.

    • Split the Sybase package into two packages: SybaseASA and SybaseASE since these two products are fairly different. You may have to update your imports or install an import redirector to the correct package.

    • Added a .stringformat attribute to connection and cursor objects. The connection attribute is used as default for all cursors created using the connection object and defines how to handle string conversions (8-bit string, mixed or Unicode objects).

    • Added HAVE_UNICODE_SUPPORT to detect the availability of Unicode support at runtime.

    • Added .encoding attribute to connection objects.

    • Added buffer Python type input handling. This allows passing binary data to data sources which don't support binary data in CHAR columns. Thanks to James Storey for bringing this up.

    • The .executeXXX() methods will now clear the statement cache in case the execute did not succeed. This will prevent function sequence errors from being reported in case the cursor is being cached for multiple use.

    Changes from 1.1.0 to 1.1.1:

    • Added some helper functions.

    • Changed the default path for the EasySoft subpackage.

    • Updated the MySQL and iODBC subpackages to default to the latest available versions (myodbc-2.50.23 and libiodbc-2.50.2). Fixed some warnings that were introduced by the new versions and added a fix for the bug in iODBC's type header file.

    • Changed the naming scheme for tables generated by the test script. This should simplify removing left-over tables after (hopefully unlikely) crashes.

    • Moved the test.py script to the Misc/ subdir.

    • Fixed a bug in the None/NULL handling code. Thanks to Hoikkala Arto for reporting this and pointing me to the error location.

    Changes from 1.0.1 to 1.1.0:

    • Tweaked the header and Setup files a little to enable more APIs by default (e.g. the DriverConnect() API is now always enabled on Windows).

    • Fixed a typo in the DriverConnect() function.

    • Updated the test.py script and documented it.

    • Reformatted the docs somewhat.

    • Added DB API 2.0 module globals threadsafety, paramstyle and apilevel.

    • Changed the way .rollback() works for databases that do not support transactions.

      Some of these databases simply ignored the rollback without any notice to the programmer (e.g. MySQL).

      mxODBC now does a check to make sure that the connection supports transactions and raises a NotSupportedError in case .rollback() is called on such a connection (using AUTOCOMMIT also means disabling transactions !).

    • Added workarounds for two MySQL problems: BLOBs got truncated and Time fields were misinterpreted (the seconds fraction part confused MySQL). The MyODBC driver still leaks memory though...

    • Integrated most of the compile time defines for specific subpackage problems into mxODBC.h. They don't have to be defined manually (for VC++) or via Setup anymore.

    • Added implicit rollback when a connection is closed. This is only done in case the driver reports an illegal transaction state upon disconnecting (some drivers do an implicit rollback prior to a disconnect). Errors are reported when doing the close explicitly by calling connection.close(); closing via garbage collection will not raise any exceptions.

    • Changed the way errors are reported: since some Python level APIs use several different ODBC APIs only the first error occurring is set and all others are ignored. In previous versions, later errors overwrote earlier ones.

    • Following the DB API 2.0, connection.rollback() can now be disabled altogether using a new compile time switch DONT_HAVE_TRANSACTIONS (see the Setup file for details). Enabling this switch will cause all transaction related calls to be skipped and the .rollback() method to not be defined anymore (using it will result in an AttributeError).

      Changed the default setup for MySQL: since MySQL does not support transactions, the new switch DONT_HAVE_TRANSACTIONS is defined per default in the Setup file.

    • Added alias connect() for Connect() needed for DB API 2.0 compliance.

    • Changed the dbi modules to be more in sync with what the interface really does. The type objects were adjusted to conform to the DB API 1.0 and DB API 2.0 standard; new constructors are available for time stamps and all supported type codes are now correctly translated to proper type objects. Some extraneous type codes were deleted (e.g. DECIMAL, NUMERIC, etc.) since these caused naming conflicts. You can now access these codes via dbi.SQL.<typecodename>.

      Note that usage of dbi.py is deprecated starting with DB API 2.0. It will no longer be supported in future versions of mxODBC.

    • Changed cursor.rowcount and cursor.colcount to return -1 as long as no execute() method has been called on the cursor to be in sync with DB API 2.0. They previously returned 0.

    • Changed the exception raised when a .fetchXXX() is called prior to doing an .execute() from InterfaceError to ProgrammingError.

    • Added new intermediate layer in the exception layout (DatabaseError) and a new subclass NotSupportedError. The layout is now fully DB API 2.0 compatible.

    • Changed the exceptions for SQL error status IM001 (not supported by driver), S1C00 (driver not capable), .rollback() exception when transactions are off or not supported from OperationalError to NotSupportedError.

    • Added EasySoft setup. Thanks to Murray Todd Williams for bringing it to my attention.

    • Added connection.bindmethod and supporting code in the interface. MySQL will not use Python type binding per default; this should make number conversions a little more reliable -- at least now it's MySQL that chooses how to convert them to strings...

    • Removed the dummy cursor.nextset() method. It will be added back again when the functionality is supported by mxODBC.

    • Input and output data for CHAR, VARCHAR, LONGVARCHAR columns is no longer truncated by mxODBC at \0 characters. The database driver/manager may still truncate the data though, so you better check it with the test script included in mxODBC (see the Testing section).

    • Added cursor.gettypeinfo().

    • Added a compile time switch to enable SQL error log flushing (FLUSH_SQL_ERRORS). See mxODBC.h for details.

    • Fixed the Python binding code to properly setup the SQL bind information too. It should now work correctly with MS ACCESS (which is very picky about these things...).

    • For SQL type binding we need a working SQLDescribeParam() API. This is now checked at connect time and the bind method parameter adjusted accordingly.

    Changes from 1.0.0 to 1.0.1:

    • Removed the SQL codes NULL_HENV, NULL_HDBC, NULL_HSTMT from the SQL class. These are handles (and pointers on some platforms) that aren't used by the exposed ODBC APIs anyway and can cause compiler warnings.

    • Andy Dustman reported that the module seems to work correctly with threads enabled. He also reported memory leakage when using mxODBC/Solid with threads which does not occur when used without threads. The exact source of this leakage is not yet determined.

    • Fixed a bug in the routine for fetching LONGVARCHAR column data. The previous version introduced a bug due to a change in the default setting of the ADABAS workaround.

    • The caching of SQL statements in cursor.execute() can now be deactivated using a compile time switch (see Setup.in for details). The cache was also extended to do string comparison in case the passed string object is not the same.

    • Changed the string representation of the MySQL connection and cursor objects to indicate that MySQL is being interfaced to.

    • Renamed compile time flag HAVE_ODBC_DRIVER_CONNECT to HAVE_SQLDriverConnect.

    Changes from 0.9.0 to 1.0.0:

    • Fixed a problem with catalog functions reporting warnings about SQLCancel being treated as SQLFreeStmt/Close.

    • Corrected the TIMESTAMP tuples to actually only use 6 entries as documented. The internal routines used to use 7 entries (the last one being nanoseconds). They now add the nanoseconds to the second part which is returned as a float.

      The strange thing about the timestamp fractions is that the ODBC standard manuals say they represent nanoseconds while an older SQL manual from IBM states they should in fact be microseconds. I've now adopted the ODBC POV, but am still not sure what is right and what wrong.

      Since I couldn't make up my mind, I've introduced a compile time switch to set things up to use microseconds instead of nanoseconds: USE_MICROSECOND_FRACTIONS. See Setup.in for more information.

      Some databases seem to use the old standard too: at least ADABAS does. The switch is defined in the ADABAS Setup per default.

    • Added some extra checks for conversion to Unix ticks. An over/underflow will now raise an OverflowError.

    • Added support for fetching fields which have an unknown size (other than LONGs). Some catalog functions return such fields for e.g. MySQL which is probably an error but one that can easily be worked around.

    • Enlarged the buffers for column names. ANSI SQL (and many databases out there) define a limit of 18 characters for column and table names (this gives a limit of 37 characters for 'table.column' descriptions). Some databases also allow longer names, so I did two things: turned the limit into a compile time option (see mxODBC.h for details) and raised the prior limit I had used (19) to 37 characters. Thanks to Tino Wildenhain (peacock.de) for pointing this out to me.

    • Added Sybase configuration donated by Gary Pennington from Sun Microsystems Ltd.

    • Upgrade Note: Moved the initialization of the SQL class to a separate source file. To upgrade, you'll have to adjust your Setup files to include the new source file mxSQLCodes.c. Have a look at Setup.in to see how it's done (or simply copy it to Setup and make the same changes to it that you did for your existing configuration). For Windows, you'll simply have to add mxSQLCodes.cpp from the Windows subdir to the project.

    • Added better support for SQL warnings: SQL state "00000" is now interpreted as success, even though the SQL API returned an error. The ODBC docs had a well hidden note about this.

    • Fixed some minor things to make the module compile with fewer warnings on WinXX with VC++. Thanks to Stephen Ng for the constructive feedback.

    • Fixed an allocation bug that resulted in corrupt memory areas. The bug was in the allocation routine for numeric columns. Thanks again to Stephen Ng for tracking this error down.

    • Added a possible workaround for a bug in MyODBC 2.50.17 which causes serious memory leakage (SQLCancel does not free up cursor resources properly). TCX recommends upgrading to the most recent versions of MySQL *and* MyODBC.

    • Added an untested Informix setup. The paths and settings are based on the files included in the free Informix SDK which is downloadable from their site.

    • Added connection.closed instance variable.

    • Changed the exception raised for invalid ranges in date/time tuples from InterfaceError to the standard ValueError.

    • Added description of used exceptions to the docs.

    • Added explicit variable and parameter unbinding in a few places. This should make memory leakage due to the drivers not taking care of this themselves less likely.

    • Fixed a possible memory leakage in cursor.execute() that could occur when passing a list of parameter tuples using data types that need to be auto-converted by mxODBC to the types the driver wants to see.

    • Made sure that when passing SQL_C_CHAR data to the driver the data is 0-terminated. This is not really needed since the driver should use the provided length field, but you never know...

    • Added another SQLFreeStmt to the execute loop for lists of tuples: although executing multiple SELECTs using different parameters without fetching any data is rarely useful, the docs state that this call must be made.

    • Added an optional name argument to the cursor constructor to allow creating named cursors.

    • Added new cursor methods .getcursorname() and .setcursorname().

    • Added several small fixes submitted by Wolf Logan from searchbutton.com to make the module compile with fewer warnings on VC6.

    • Fixed a memory leakage in the code for caching the most recently executed SQL statement found by Wolf Logan. Many thanks go to him for tracking this bug.

    • Added a compile time option to turn off reporting warnings by means of raising an exception: DONT_REPORT_WARNINGS.

    • Changed the exception type for the SQL error code 01000 from InternalError to Warning. Thanks Wolf Logan for noticing this bugglet.

    • Added Python threading macros around typically long running SQL function calls.

    • Added yet more fixes to make compilation on Windows platforms easier.

    • Changed exceptions Error and Warnings to be subclasses of exceptions.StandardError. This integrates them into Python's standard exception hierarchy.

    Changes from 0.8.1 to 0.9.0:

    • Added a detailed debugging facility. To use it, compile the module with flag -DMAL_DEBUG and run the interpreter in debugging mode (flag -d). Verbose output will then be written to mxODBC.log in the current working directory.

    • Fixed a bug that caused passing fractions of a second in TIMESTAMP fields to fail (thanks to Gert Degreef for pointing me to it).

    • Added a precompiled version of mxODBC for Windows which was kindly provided by Martin Sckopke (gis.ibfs.de).

    • Added more preconfigured subpackages and enhanced the documentation a bit.

    • Fixed a bug in the error handling routine. It caused very picky drivers/managers to not report any errors other than mxODBC.InterfaceError. This was the case for iODBC, BTW.

    • Important: Enabled error reporting for clearing AUTOCOMMIT. This may cause connections to fail that were working before. MySQL in particular is affected by this, since it currently does not implement transactions. There is a new compile switch DONT_CLEAR_AUTOCOMMIT to set the default value of the clear_auto_commit option on connection constructors to 0. It is used for the MySQL subpackage.

    • Moved the SQL_XXX defines to a class named SQL. This leaves the module dictionary in a much saner state. The class defines nearly all symbols that are included in ODBC 2.0. Thus, SQL_XXX values are now available as SQL.XXX.

    • Added connection.getinfo().

    • The iODBC subpackage is now tested and working. Well, at least for me it is.

    Changes from 0.8.0 to 0.8.1:

    • Added checks for closed cursors to all interface methods. They will now raise a ProgrammingError whenever you try to do actions on closed cursors. This makes the interface a lot safer against core dumps resulting from malfunctioning ODBC drivers that have problems handling operations on closed cursors (they should return an error, but instead simply dump core...).

    • Changed the exception that was being raised whenever you tried to use a cursor on a closed connection from InterfaceError to ProgrammingError (after all, it's not the interface's fault that something doesn't work ;).

    • Trying to create cursors on closed connections will now always raise a ProgrammingError.

    Changes from 0.7 to 0.8.1:

    • Fixed a bug in the init-function that caused an endless loop in case something went wrong while initialising the module (e.g. if the mxDateTime module can't be found).

    • Added a work-around for get/setconnectoption to get them to work with ADABAS: previous versions dumped core whenever you tried to set a connection option.

    • Moved DECIMAL and NUMERIC types from integer to floating point conversion.

    • Fixed a bug that could cause wrong data to be sent to the database when passing integers to fields that expect floating point numbers. Thanks to Andy Dustman for pointing me at these.

    • Added conversion code that automatically converts strings to floats or integers as required. You can now pass e.g. a string for an INTEGER column and a routine similar to the builtin int() will do the conversion for you. Same for floats. It is still better to pass the "right" types as no extra conversion is done in this case.

    • Added a datetimeformat attribute to connection objects. This is used as default for all cursors created using the connection object.

    • Added tests to all methods that ensure an open connection. The ADABAS ODBC client libs dump core if the cursor executes commands on closed connections. This also makes garbage collecting stray cursors a lot safer.

    • Added InterfaceError. All interface related errors are now reported as InterfaceErrors.

    • Changed the package structure: There now is one subpackage per preconfigured database. This allows for easy inter-database communication and also makes switching from one database to another less cumbersome.

    • Changed version numbering to major.minor.patchlevel.

    Changes from 0.6 to 0.7:

    • Documented cursor.datetimeformat.

    • Synchronized the module with my mxDateTime extension package.

    • Updated the dbi.py abstraction module. It is still not 100% compatible with what the DB API 1.0 specs say (the module does not return instances of dbiRaw and dbiDate), but I'm reluctant to fix it.

    • Added a cleanup function that gets called when the interpreter finalizes. It deallocates the ODBC environment.

    • Added __members__ attribute to cursors. This returns a list of known attributes (e.g. dir() looks for it and displays them).

    • Added optional fourth parameter clear_auto_commit to the connection constructor.

    • Date, time and timestamp can now also be processed via Python strings. The conversion is left to the ODBC driver/manager.

    • Fixed a bug in the allocation routines of both the connection and the cursor type that sometimes caused a segfault due to an ODBC error occurring while connecting.

    • Added a new API DriverConnect() and renamed the old API to Connect(). The old name ODBC() is kept as an alias for DB API compliance and backward compatibility.

    • Fixed a refcount bug that prevented connection objects from being garbage collected after a cursor was used on them (the cursor didn't dereference the connection object).

    Changes from 0.5 to 0.6:

    • ODBC is now a package. The name mxODBC is only used for the C extension (with the mx-prefix to prevent naming hassles). You can access the module by importing ODBC and the dbi-submodule as ODBC.dbi.
